Test Report: Docker_Linux_crio 21724

                    
4b13c2b2494da1f236451a26339f84fcb7875a27:2025-10-10:41851

Tests failed (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.24
35 TestAddons/parallel/Registry 14.61
36 TestAddons/parallel/RegistryCreds 0.37
37 TestAddons/parallel/Ingress 148.86
38 TestAddons/parallel/InspektorGadget 6.23
39 TestAddons/parallel/MetricsServer 5.29
41 TestAddons/parallel/CSI 33.62
42 TestAddons/parallel/Headlamp 2.45
43 TestAddons/parallel/CloudSpanner 5.24
44 TestAddons/parallel/LocalPath 13.14
45 TestAddons/parallel/NvidiaDevicePlugin 6.28
46 TestAddons/parallel/Yakd 5.24
47 TestAddons/parallel/AmdGpuDevicePlugin 6.23
98 TestFunctional/parallel/ServiceCmdConnect 602.81
117 TestFunctional/parallel/ServiceCmd/DeployApp 600.66
138 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.85
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.77
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.29
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.18
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.32
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
154 TestFunctional/parallel/ServiceCmd/Format 0.53
155 TestFunctional/parallel/ServiceCmd/URL 0.53
191 TestJSONOutput/pause/Command 2.34
197 TestJSONOutput/unpause/Command 1.65
294 TestPause/serial/Pause 6.94
342 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.14
349 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.38
353 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.18
363 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.14
367 TestStartStop/group/old-k8s-version/serial/Pause 5.92
377 TestStartStop/group/no-preload/serial/Pause 6.28
379 TestStartStop/group/embed-certs/serial/Pause 7.19
381 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.05
388 TestStartStop/group/newest-cni/serial/Pause 5.33
392 TestStartStop/group/default-k8s-diff-port/serial/Pause 5.59
TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 addons disable volcano --alsologtostderr -v=1: exit status 11 (239.475791ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1010 17:32:43.577007   19100 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:32:43.577387   19100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:32:43.577403   19100 out.go:374] Setting ErrFile to fd 2...
	I1010 17:32:43.577410   19100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:32:43.577626   19100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:32:43.577961   19100 mustload.go:65] Loading cluster: addons-594989
	I1010 17:32:43.578470   19100 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:32:43.578491   19100 addons.go:606] checking whether the cluster is paused
	I1010 17:32:43.578618   19100 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:32:43.578635   19100 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:32:43.579184   19100 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:32:43.598368   19100 ssh_runner.go:195] Run: systemctl --version
	I1010 17:32:43.598430   19100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:32:43.616685   19100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:32:43.713691   19100 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:32:43.713779   19100 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:32:43.743323   19100 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:32:43.743343   19100 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:32:43.743346   19100 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:32:43.743350   19100 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:32:43.743352   19100 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:32:43.743357   19100 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:32:43.743360   19100 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:32:43.743362   19100 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:32:43.743365   19100 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:32:43.743374   19100 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:32:43.743377   19100 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:32:43.743380   19100 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:32:43.743382   19100 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:32:43.743385   19100 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:32:43.743387   19100 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:32:43.743391   19100 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:32:43.743393   19100 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:32:43.743397   19100 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:32:43.743399   19100 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:32:43.743402   19100 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:32:43.743404   19100 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:32:43.743406   19100 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:32:43.743409   19100 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:32:43.743412   19100 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:32:43.743415   19100 cri.go:89] found id: ""
	I1010 17:32:43.743453   19100 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:32:43.757842   19100 out.go:203] 
	W1010 17:32:43.759191   19100 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:32:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:32:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:32:43.759210   19100 out.go:285] * 
	* 
	W1010 17:32:43.762348   19100 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:32:43.763670   19100 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-594989 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)
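Failure-mode note: before disabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" inside the node. On this crio node /run/runc does not exist, so the check fails and each of the "addons disable" failures shown in this report exits 11 (MK_ADDON_DISABLE_PAUSED) at the same point. A minimal reproduction, assuming the addons-594989 profile is still running (both commands are taken from the log above):

    # fails on this node with: open /run/runc: no such file or directory
    minikube -p addons-594989 ssh -- sudo runc list -f json

    # the CRI-level listing that succeeds in the same log
    minikube -p addons-594989 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system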

TestAddons/parallel/Registry (14.61s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.974659ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-6gl8m" [7340a6ae-2ed3-4269-8f05-53911db0a12c] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002685682s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-8mr65" [9b638779-096b-4de7-a496-dfbca677f32f] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002481903s
addons_test.go:392: (dbg) Run:  kubectl --context addons-594989 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-594989 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-594989 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.160682532s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 ip
2025/10/10 17:33:08 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 addons disable registry --alsologtostderr -v=1: exit status 11 (252.548159ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1010 17:33:08.979497   21798 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:33:08.979828   21798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:08.979841   21798 out.go:374] Setting ErrFile to fd 2...
	I1010 17:33:08.979847   21798 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:08.980171   21798 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:33:08.980520   21798 mustload.go:65] Loading cluster: addons-594989
	I1010 17:33:08.981034   21798 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:08.981070   21798 addons.go:606] checking whether the cluster is paused
	I1010 17:33:08.981183   21798 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:08.981200   21798 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:33:08.981589   21798 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:33:09.000568   21798 ssh_runner.go:195] Run: systemctl --version
	I1010 17:33:09.000630   21798 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:33:09.019940   21798 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:33:09.121132   21798 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:33:09.121215   21798 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:33:09.152407   21798 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:33:09.152432   21798 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:33:09.152438   21798 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:33:09.152443   21798 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:33:09.152447   21798 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:33:09.152453   21798 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:33:09.152457   21798 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:33:09.152461   21798 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:33:09.152463   21798 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:33:09.152473   21798 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:33:09.152475   21798 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:33:09.152478   21798 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:33:09.152481   21798 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:33:09.152483   21798 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:33:09.152486   21798 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:33:09.152490   21798 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:33:09.152492   21798 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:33:09.152497   21798 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:33:09.152499   21798 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:33:09.152503   21798 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:33:09.152507   21798 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:33:09.152516   21798 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:33:09.152524   21798 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:33:09.152528   21798 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:33:09.152532   21798 cri.go:89] found id: ""
	I1010 17:33:09.152578   21798 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:33:09.170786   21798 out.go:203] 
	W1010 17:33:09.172533   21798 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:33:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:33:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:33:09.172555   21798 out.go:285] * 
	* 
	W1010 17:33:09.177742   21798 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:33:09.179595   21798 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-594989 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.61s)
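The registry checks themselves pass (both pods report healthy and the in-cluster probe succeeds); only the final disable step fails, with the same runc pause-check error as above. For reference, the reachability probe the test runs can be repeated by hand (a sketch assuming the cluster is still up; the command mirrors the log):

    kubectl --context addons-594989 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"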

TestAddons/parallel/RegistryCreds (0.37s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.746763ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-594989
addons_test.go:332: (dbg) Run:  kubectl --context addons-594989 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (228.049749ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1010 17:33:15.604606   22586 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:33:15.604750   22586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:15.604759   22586 out.go:374] Setting ErrFile to fd 2...
	I1010 17:33:15.604762   22586 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:15.604948   22586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:33:15.605238   22586 mustload.go:65] Loading cluster: addons-594989
	I1010 17:33:15.605553   22586 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:15.605567   22586 addons.go:606] checking whether the cluster is paused
	I1010 17:33:15.605642   22586 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:15.605654   22586 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:33:15.605991   22586 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:33:15.623184   22586 ssh_runner.go:195] Run: systemctl --version
	I1010 17:33:15.623232   22586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:33:15.640842   22586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:33:15.736516   22586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:33:15.736604   22586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:33:15.766229   22586 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:33:15.766261   22586 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:33:15.766266   22586 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:33:15.766269   22586 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:33:15.766272   22586 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:33:15.766275   22586 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:33:15.766278   22586 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:33:15.766280   22586 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:33:15.766283   22586 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:33:15.766293   22586 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:33:15.766296   22586 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:33:15.766298   22586 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:33:15.766301   22586 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:33:15.766303   22586 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:33:15.766306   22586 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:33:15.766317   22586 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:33:15.766324   22586 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:33:15.766328   22586 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:33:15.766331   22586 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:33:15.766333   22586 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:33:15.766338   22586 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:33:15.766341   22586 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:33:15.766343   22586 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:33:15.766345   22586 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:33:15.766348   22586 cri.go:89] found id: ""
	I1010 17:33:15.766387   22586 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:33:15.780942   22586 out.go:203] 
	W1010 17:33:15.781942   22586 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:33:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:33:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:33:15.781981   22586 out.go:285] * 
	* 
	W1010 17:33:15.785355   22586 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:33:15.786410   22586 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-594989 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.37s)
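As above, the functional part of this test passes: the non-interactive configure path reads credentials from a JSON file, and the test validates the result by dumping kube-system secrets. Both steps can be repeated by hand (commands taken from the log; the testdata path assumes a minikube integration checkout):

    out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-594989
    kubectl --context addons-594989 -n kube-system get secret -o yaml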

TestAddons/parallel/Ingress (148.86s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-594989 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-594989 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-594989 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [7f3b6373-bce8-4305-8683-e2d628005ee8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [7f3b6373-bce8-4305-8683-e2d628005ee8] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003207475s
I1010 17:33:16.520594    9354 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.370660651s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-594989 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
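The exit status 28 surfaced by the curl step above is most likely curl's own exit code (28 = operation timed out) passing through ssh: the request to the ingress never completed. A hand re-run with an explicit client-side timeout makes the hang visible quickly (a sketch, assuming the profile is still up; -m is curl's --max-time flag, not part of the original test command):

    out/minikube-linux-amd64 -p addons-594989 ssh "curl -s -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # exit code 28 here means curl gave up waiting for the ingress controller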
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-594989
helpers_test.go:243: (dbg) docker inspect addons-594989:

-- stdout --
	[
	    {
	        "Id": "9fa2ef611c9849fd3426794362cf847d0d6f7c23a20ae6d9800988ad87ecb135",
	        "Created": "2025-10-10T17:30:31.036645528Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11495,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T17:30:31.068002446Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/9fa2ef611c9849fd3426794362cf847d0d6f7c23a20ae6d9800988ad87ecb135/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9fa2ef611c9849fd3426794362cf847d0d6f7c23a20ae6d9800988ad87ecb135/hostname",
	        "HostsPath": "/var/lib/docker/containers/9fa2ef611c9849fd3426794362cf847d0d6f7c23a20ae6d9800988ad87ecb135/hosts",
	        "LogPath": "/var/lib/docker/containers/9fa2ef611c9849fd3426794362cf847d0d6f7c23a20ae6d9800988ad87ecb135/9fa2ef611c9849fd3426794362cf847d0d6f7c23a20ae6d9800988ad87ecb135-json.log",
	        "Name": "/addons-594989",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-594989:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-594989",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9fa2ef611c9849fd3426794362cf847d0d6f7c23a20ae6d9800988ad87ecb135",
	                "LowerDir": "/var/lib/docker/overlay2/6327399dea96096ab55cbb18fc07221ce0de561a801c7e62e54cae577730c751-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6327399dea96096ab55cbb18fc07221ce0de561a801c7e62e54cae577730c751/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6327399dea96096ab55cbb18fc07221ce0de561a801c7e62e54cae577730c751/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6327399dea96096ab55cbb18fc07221ce0de561a801c7e62e54cae577730c751/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-594989",
	                "Source": "/var/lib/docker/volumes/addons-594989/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-594989",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-594989",
	                "name.minikube.sigs.k8s.io": "addons-594989",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5feccdb82fa32fd4f5329a3672e4a95b15c3a398fc9ff55ab3b441482dac6882",
	            "SandboxKey": "/var/run/docker/netns/5feccdb82fa3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-594989": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:cc:14:87:ea:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e5afbb060625fdf00eb96e69bc6bb1936796d1d3115b5c6a81a4bfc12076dd40",
	                    "EndpointID": "037cbe2d2a9496d52ebc03cf599dcb81c89561985495e224bd952445db89c960",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-594989",
	                        "9fa2ef611c98"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
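The inspect output above is also where minikube's ssh_runner resolved its connection earlier in the report: host port 32768 on 127.0.0.1 maps to the node's 22/tcp. The same value can be read back with the Go-template filter seen in the log (assuming the container still exists):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-594989
    # => 32768 on this run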
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-594989 -n addons-594989
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-594989 logs -n 25: (1.164075515s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-717710 --alsologtostderr --binary-mirror http://127.0.0.1:39877 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-717710 │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │                     │
	│ delete  │ -p binary-mirror-717710                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-717710 │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │ 10 Oct 25 17:30 UTC │
	│ addons  │ enable dashboard -p addons-594989                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │                     │
	│ addons  │ disable dashboard -p addons-594989                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │                     │
	│ start   │ -p addons-594989 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │ 10 Oct 25 17:32 UTC │
	│ addons  │ addons-594989 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:32 UTC │                     │
	│ addons  │ addons-594989 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:32 UTC │                     │
	│ addons  │ enable headlamp -p addons-594989 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:32 UTC │                     │
	│ addons  │ addons-594989 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:32 UTC │                     │
	│ addons  │ addons-594989 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:32 UTC │                     │
	│ addons  │ addons-594989 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:33 UTC │                     │
	│ addons  │ addons-594989 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:33 UTC │                     │
	│ ip      │ addons-594989 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:33 UTC │ 10 Oct 25 17:33 UTC │
	│ addons  │ addons-594989 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:33 UTC │                     │
	│ ssh     │ addons-594989 ssh cat /opt/local-path-provisioner/pvc-f303975d-b3e2-4cbe-8760-b97c507c5465_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:33 UTC │ 10 Oct 25 17:33 UTC │
	│ addons  │ addons-594989 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:33 UTC │                     │
	│ addons  │ addons-594989 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:33 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-594989                                                                                                                                                                                                                                                                                                                                                                                           │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:33 UTC │ 10 Oct 25 17:33 UTC │
	│ addons  │ addons-594989 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:33 UTC │                     │
	│ ssh     │ addons-594989 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:33 UTC │                     │
	│ addons  │ addons-594989 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:33 UTC │                     │
	│ addons  │ addons-594989 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:33 UTC │                     │
	│ addons  │ addons-594989 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:33 UTC │                     │
	│ addons  │ addons-594989 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:33 UTC │                     │
	│ ip      │ addons-594989 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-594989        │ jenkins │ v1.37.0 │ 10 Oct 25 17:35 UTC │ 10 Oct 25 17:35 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 17:30:06
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 17:30:06.361014   10838 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:30:06.361342   10838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:30:06.361352   10838 out.go:374] Setting ErrFile to fd 2...
	I1010 17:30:06.361355   10838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:30:06.361576   10838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:30:06.362173   10838 out.go:368] Setting JSON to false
	I1010 17:30:06.363030   10838 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":746,"bootTime":1760116660,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 17:30:06.363137   10838 start.go:141] virtualization: kvm guest
	I1010 17:30:06.365117   10838 out.go:179] * [addons-594989] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 17:30:06.366254   10838 notify.go:220] Checking for updates...
	I1010 17:30:06.366294   10838 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 17:30:06.367395   10838 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 17:30:06.368875   10838 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 17:30:06.369995   10838 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 17:30:06.371029   10838 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 17:30:06.372047   10838 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 17:30:06.373208   10838 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 17:30:06.396002   10838 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 17:30:06.396126   10838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 17:30:06.454959   10838 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-10 17:30:06.44499643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 17:30:06.455093   10838 docker.go:318] overlay module found
	I1010 17:30:06.456955   10838 out.go:179] * Using the docker driver based on user configuration
	I1010 17:30:06.458129   10838 start.go:305] selected driver: docker
	I1010 17:30:06.458144   10838 start.go:925] validating driver "docker" against <nil>
	I1010 17:30:06.458155   10838 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 17:30:06.458713   10838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 17:30:06.514418   10838 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-10 17:30:06.50512395 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 17:30:06.514567   10838 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1010 17:30:06.514772   10838 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 17:30:06.516601   10838 out.go:179] * Using Docker driver with root privileges
	I1010 17:30:06.517737   10838 cni.go:84] Creating CNI manager for ""
	I1010 17:30:06.517795   10838 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 17:30:06.517808   10838 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1010 17:30:06.517871   10838 start.go:349] cluster config:
	{Name:addons-594989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-594989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 17:30:06.519220   10838 out.go:179] * Starting "addons-594989" primary control-plane node in "addons-594989" cluster
	I1010 17:30:06.520323   10838 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 17:30:06.521435   10838 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 17:30:06.522464   10838 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 17:30:06.522514   10838 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 17:30:06.522523   10838 cache.go:58] Caching tarball of preloaded images
	I1010 17:30:06.522579   10838 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 17:30:06.522609   10838 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 17:30:06.522617   10838 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1010 17:30:06.522976   10838 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/config.json ...
	I1010 17:30:06.522998   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/config.json: {Name:mk3dbc5a9832b9046e3fa50e98f8fc65a9bc2515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:06.540080   10838 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 to local cache
	I1010 17:30:06.540206   10838 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local cache directory
	I1010 17:30:06.540224   10838 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local cache directory, skipping pull
	I1010 17:30:06.540230   10838 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in cache, skipping pull
	I1010 17:30:06.540237   10838 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 as a tarball
	I1010 17:30:06.540243   10838 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 from local cache
	I1010 17:30:21.903904   10838 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 from cached tarball
	I1010 17:30:21.903939   10838 cache.go:232] Successfully downloaded all kic artifacts
	I1010 17:30:21.903979   10838 start.go:360] acquireMachinesLock for addons-594989: {Name:mk3be95cc494884c6edea2e4e0b6f8ab4aa5f686 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 17:30:21.904098   10838 start.go:364] duration metric: took 98.011µs to acquireMachinesLock for "addons-594989"
	I1010 17:30:21.904123   10838 start.go:93] Provisioning new machine with config: &{Name:addons-594989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-594989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 17:30:21.904185   10838 start.go:125] createHost starting for "" (driver="docker")
	I1010 17:30:21.905621   10838 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1010 17:30:21.905804   10838 start.go:159] libmachine.API.Create for "addons-594989" (driver="docker")
	I1010 17:30:21.905829   10838 client.go:168] LocalClient.Create starting
	I1010 17:30:21.905927   10838 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem
	I1010 17:30:22.098456   10838 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem
	I1010 17:30:22.664950   10838 cli_runner.go:164] Run: docker network inspect addons-594989 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1010 17:30:22.681400   10838 cli_runner.go:211] docker network inspect addons-594989 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1010 17:30:22.681490   10838 network_create.go:284] running [docker network inspect addons-594989] to gather additional debugging logs...
	I1010 17:30:22.681513   10838 cli_runner.go:164] Run: docker network inspect addons-594989
	W1010 17:30:22.696705   10838 cli_runner.go:211] docker network inspect addons-594989 returned with exit code 1
	I1010 17:30:22.696749   10838 network_create.go:287] error running [docker network inspect addons-594989]: docker network inspect addons-594989: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-594989 not found
	I1010 17:30:22.696769   10838 network_create.go:289] output of [docker network inspect addons-594989]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-594989 not found
	
	** /stderr **
	I1010 17:30:22.696888   10838 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 17:30:22.713531   10838 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d78860}
	I1010 17:30:22.713579   10838 network_create.go:124] attempt to create docker network addons-594989 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1010 17:30:22.713625   10838 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-594989 addons-594989
	I1010 17:30:22.767588   10838 network_create.go:108] docker network addons-594989 192.168.49.0/24 created
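	As a cross-check, the network created above can be inspected directly with Docker's CLI; a minimal sketch, with the subnet and gateway values taken from the log:
	
	  # Sketch: inspect the bridge network minikube just created (name and values from the log above).
	  docker network inspect addons-594989 \
	    --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
	  # expected: subnet=192.168.49.0/24 gateway=192.168.49.1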
	I1010 17:30:22.767616   10838 kic.go:121] calculated static IP "192.168.49.2" for the "addons-594989" container
	I1010 17:30:22.767671   10838 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1010 17:30:22.782992   10838 cli_runner.go:164] Run: docker volume create addons-594989 --label name.minikube.sigs.k8s.io=addons-594989 --label created_by.minikube.sigs.k8s.io=true
	I1010 17:30:22.799155   10838 oci.go:103] Successfully created a docker volume addons-594989
	I1010 17:30:22.799221   10838 cli_runner.go:164] Run: docker run --rm --name addons-594989-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-594989 --entrypoint /usr/bin/test -v addons-594989:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -d /var/lib
	I1010 17:30:26.636008   10838 cli_runner.go:217] Completed: docker run --rm --name addons-594989-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-594989 --entrypoint /usr/bin/test -v addons-594989:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -d /var/lib: (3.836748235s)
	I1010 17:30:26.636034   10838 oci.go:107] Successfully prepared a docker volume addons-594989
	I1010 17:30:26.636045   10838 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 17:30:26.636109   10838 kic.go:194] Starting extracting preloaded images to volume ...
	I1010 17:30:26.636155   10838 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-594989:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1010 17:30:30.967731   10838 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-594989:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.331540703s)
	I1010 17:30:30.967757   10838 kic.go:203] duration metric: took 4.331646769s to extract preloaded images to volume ...
	W1010 17:30:30.967837   10838 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1010 17:30:30.967868   10838 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1010 17:30:30.967903   10838 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1010 17:30:31.021751   10838 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-594989 --name addons-594989 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-594989 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-594989 --network addons-594989 --ip 192.168.49.2 --volume addons-594989:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6
	I1010 17:30:31.302357   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Running}}
	I1010 17:30:31.320699   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:31.338440   10838 cli_runner.go:164] Run: docker exec addons-594989 stat /var/lib/dpkg/alternatives/iptables
	I1010 17:30:31.386035   10838 oci.go:144] the created container "addons-594989" has a running status.
	I1010 17:30:31.386096   10838 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa...
	I1010 17:30:31.880950   10838 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1010 17:30:31.905495   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:31.924177   10838 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1010 17:30:31.924201   10838 kic_runner.go:114] Args: [docker exec --privileged addons-594989 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1010 17:30:31.961364   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:31.978848   10838 machine.go:93] provisionDockerMachine start ...
	I1010 17:30:31.978930   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:31.996247   10838 main.go:141] libmachine: Using SSH client type: native
	I1010 17:30:31.996467   10838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1010 17:30:31.996478   10838 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 17:30:32.126861   10838 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-594989
	
	I1010 17:30:32.126888   10838 ubuntu.go:182] provisioning hostname "addons-594989"
	I1010 17:30:32.126950   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:32.143409   10838 main.go:141] libmachine: Using SSH client type: native
	I1010 17:30:32.143599   10838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1010 17:30:32.143613   10838 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-594989 && echo "addons-594989" | sudo tee /etc/hostname
	I1010 17:30:32.283957   10838 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-594989
	
	I1010 17:30:32.284037   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:32.301087   10838 main.go:141] libmachine: Using SSH client type: native
	I1010 17:30:32.301289   10838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1010 17:30:32.301305   10838 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-594989' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-594989/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-594989' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 17:30:32.431567   10838 main.go:141] libmachine: SSH cmd err, output: <nil>: 
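	The script above touches /etc/hosts only when no addons-594989 entry exists yet, so it is idempotent; a minimal sketch of checking the result (the expected value is an assumption, since the guard may have matched a pre-existing entry):
	
	  # Sketch: confirm the loopback hostname mapping the script above ensures.
	  grep '^127.0.1.1' /etc/hosts
	  # e.g. 127.0.1.1 addons-594989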
	I1010 17:30:32.431599   10838 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 17:30:32.431633   10838 ubuntu.go:190] setting up certificates
	I1010 17:30:32.431645   10838 provision.go:84] configureAuth start
	I1010 17:30:32.431707   10838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-594989
	I1010 17:30:32.449703   10838 provision.go:143] copyHostCerts
	I1010 17:30:32.449782   10838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 17:30:32.449918   10838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 17:30:32.450011   10838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 17:30:32.450107   10838 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.addons-594989 san=[127.0.0.1 192.168.49.2 addons-594989 localhost minikube]
	I1010 17:30:32.585040   10838 provision.go:177] copyRemoteCerts
	I1010 17:30:32.585111   10838 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 17:30:32.585142   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:32.601612   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:32.697574   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 17:30:32.717499   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1010 17:30:32.736498   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 17:30:32.755394   10838 provision.go:87] duration metric: took 323.735274ms to configureAuth
	I1010 17:30:32.755414   10838 ubuntu.go:206] setting minikube options for container-runtime
	I1010 17:30:32.755575   10838 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:30:32.755662   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:32.772719   10838 main.go:141] libmachine: Using SSH client type: native
	I1010 17:30:32.772922   10838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1010 17:30:32.772939   10838 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 17:30:33.043359   10838 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
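	The tee command above writes a one-line sysconfig drop-in before restarting CRI-O; a minimal sketch of verifying it, with the expected content taken from the SSH output:
	
	  # Sketch: confirm the insecure-registry flag landed in the drop-in written above.
	  cat /etc/sysconfig/crio.minikube
	  # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '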
	
	I1010 17:30:33.043378   10838 machine.go:96] duration metric: took 1.064511043s to provisionDockerMachine
	I1010 17:30:33.043388   10838 client.go:171] duration metric: took 11.137551987s to LocalClient.Create
	I1010 17:30:33.043404   10838 start.go:167] duration metric: took 11.137598801s to libmachine.API.Create "addons-594989"
	I1010 17:30:33.043413   10838 start.go:293] postStartSetup for "addons-594989" (driver="docker")
	I1010 17:30:33.043425   10838 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 17:30:33.043479   10838 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 17:30:33.043532   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:33.061237   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:33.158323   10838 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 17:30:33.161594   10838 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 17:30:33.161619   10838 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 17:30:33.161629   10838 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 17:30:33.161684   10838 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 17:30:33.161707   10838 start.go:296] duration metric: took 118.287948ms for postStartSetup
	I1010 17:30:33.161960   10838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-594989
	I1010 17:30:33.178986   10838 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/config.json ...
	I1010 17:30:33.179282   10838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 17:30:33.179330   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:33.197090   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:33.288847   10838 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 17:30:33.292976   10838 start.go:128] duration metric: took 11.388779655s to createHost
	I1010 17:30:33.292994   10838 start.go:83] releasing machines lock for "addons-594989", held for 11.38888394s
	I1010 17:30:33.293065   10838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-594989
	I1010 17:30:33.310630   10838 ssh_runner.go:195] Run: cat /version.json
	I1010 17:30:33.310681   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:33.310713   10838 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 17:30:33.310765   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:33.328441   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:33.330683   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:33.420133   10838 ssh_runner.go:195] Run: systemctl --version
	I1010 17:30:33.475613   10838 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 17:30:33.512267   10838 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 17:30:33.516907   10838 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 17:30:33.516975   10838 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 17:30:33.545910   10838 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 17:30:33.545934   10838 start.go:495] detecting cgroup driver to use...
	I1010 17:30:33.545968   10838 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 17:30:33.546028   10838 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 17:30:33.563138   10838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 17:30:33.576306   10838 docker.go:218] disabling cri-docker service (if available) ...
	I1010 17:30:33.576364   10838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 17:30:33.593319   10838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 17:30:33.611251   10838 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 17:30:33.691837   10838 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 17:30:33.781458   10838 docker.go:234] disabling docker service ...
	I1010 17:30:33.781544   10838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 17:30:33.800098   10838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 17:30:33.813380   10838 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 17:30:33.899213   10838 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 17:30:33.981359   10838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 17:30:33.994260   10838 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 17:30:34.009329   10838 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 17:30:34.009377   10838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:30:34.020133   10838 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 17:30:34.020214   10838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:30:34.029853   10838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:30:34.039331   10838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:30:34.048759   10838 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 17:30:34.057508   10838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:30:34.067102   10838 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:30:34.082131   10838 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:30:34.091405   10838 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 17:30:34.099616   10838 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 17:30:34.099673   10838 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 17:30:34.111830   10838 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 17:30:34.120353   10838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 17:30:34.201747   10838 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 17:30:34.337513   10838 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 17:30:34.337584   10838 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 17:30:34.341465   10838 start.go:563] Will wait 60s for crictl version
	I1010 17:30:34.341512   10838 ssh_runner.go:195] Run: which crictl
	I1010 17:30:34.344934   10838 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 17:30:34.370191   10838 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 17:30:34.370335   10838 ssh_runner.go:195] Run: crio --version
	I1010 17:30:34.397163   10838 ssh_runner.go:195] Run: crio --version
	I1010 17:30:34.425005   10838 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 17:30:34.426025   10838 cli_runner.go:164] Run: docker network inspect addons-594989 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 17:30:34.443404   10838 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1010 17:30:34.447406   10838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 17:30:34.457961   10838 kubeadm.go:883] updating cluster {Name:addons-594989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-594989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 17:30:34.458104   10838 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 17:30:34.458172   10838 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 17:30:34.489367   10838 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 17:30:34.489388   10838 crio.go:433] Images already preloaded, skipping extraction
	I1010 17:30:34.489444   10838 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 17:30:34.515632   10838 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 17:30:34.515653   10838 cache_images.go:85] Images are preloaded, skipping loading
	I1010 17:30:34.515659   10838 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1010 17:30:34.515744   10838 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-594989 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-594989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 17:30:34.515818   10838 ssh_runner.go:195] Run: crio config
	I1010 17:30:34.560118   10838 cni.go:84] Creating CNI manager for ""
	I1010 17:30:34.560140   10838 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 17:30:34.560168   10838 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 17:30:34.560196   10838 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-594989 NodeName:addons-594989 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 17:30:34.560342   10838 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-594989"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 17:30:34.560405   10838 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 17:30:34.569025   10838 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 17:30:34.569097   10838 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 17:30:34.577263   10838 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1010 17:30:34.590731   10838 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 17:30:34.607183   10838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
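	The rendered kubeadm config (shown in full above) can be sanity-checked before kubeadm runs; a minimal sketch using kubeadm's own validator, with the binary and config paths taken from the log (assuming the kubeadm config validate subcommand, available in recent kubeadm releases, is present in this build):
	
	  # Sketch: validate the generated kubeadm config without mutating the node.
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new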
	I1010 17:30:34.620714   10838 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1010 17:30:34.624223   10838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 17:30:34.634393   10838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 17:30:34.714616   10838 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 17:30:34.739452   10838 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989 for IP: 192.168.49.2
	I1010 17:30:34.739470   10838 certs.go:195] generating shared ca certs ...
	I1010 17:30:34.739484   10838 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:34.739609   10838 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 17:30:35.139130   10838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt ...
	I1010 17:30:35.139158   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt: {Name:mk650aa7f4ff32ad966d5e8b39e5e2b32aca7c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.139352   10838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key ...
	I1010 17:30:35.139367   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key: {Name:mkdb6a8b6dbc479523f0cc85aae637cf977fc8fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.139474   10838 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 17:30:35.389507   10838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt ...
	I1010 17:30:35.389535   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt: {Name:mkde245fd3fbe3a5dace53fe07e5b3036cbfe44d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.389721   10838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key ...
	I1010 17:30:35.389735   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key: {Name:mkfd3baeb73564eb3c648c6dd88a16a028f3b4b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.389841   10838 certs.go:257] generating profile certs ...
	I1010 17:30:35.389904   10838 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.key
	I1010 17:30:35.389924   10838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt with IP's: []
	I1010 17:30:35.739400   10838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt ...
	I1010 17:30:35.739430   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: {Name:mka71586df5688b96c522c53e41e713d2b473b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.739635   10838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.key ...
	I1010 17:30:35.739649   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.key: {Name:mka2d321442c6650503dfec5163f4835012b868d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.739755   10838 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.key.b6f25ad6
	I1010 17:30:35.739776   10838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.crt.b6f25ad6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1010 17:30:35.993166   10838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.crt.b6f25ad6 ...
	I1010 17:30:35.993193   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.crt.b6f25ad6: {Name:mk71a098e94d4700b6684874f0d747b97e6a32bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.993386   10838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.key.b6f25ad6 ...
	I1010 17:30:35.993405   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.key.b6f25ad6: {Name:mk36505aa863981e7d7fa7fe93adca4604c45146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.993516   10838 certs.go:382] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.crt.b6f25ad6 -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.crt
	I1010 17:30:35.993610   10838 certs.go:386] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.key.b6f25ad6 -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.key
	I1010 17:30:35.993665   10838 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.key
	I1010 17:30:35.993684   10838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.crt with IP's: []
	I1010 17:30:36.455456   10838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.crt ...
	I1010 17:30:36.455489   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.crt: {Name:mk617bd2c82bf6e1ed8206255f493cfc594258af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:36.455667   10838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.key ...
	I1010 17:30:36.455678   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.key: {Name:mkc40a9546349e2ce8cf3e7efa3c131c37c4b0e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:36.455839   10838 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 17:30:36.455873   10838 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 17:30:36.455894   10838 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 17:30:36.455913   10838 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
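The certs.go steps above create two self-signed CAs (minikubeCA and proxyClientCA), then sign the profile certs against them: the "minikube-user" client cert, the apiserver cert with the four IP SANs listed at 17:30:35.739776, and the "aggregator" proxy-client cert. A standard-library sketch of the CA-creation half; the subject, key size, and lifetime here are illustrative assumptions, not minikube's actual parameters:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		// Self-signed CA template: the cert is both subject and issuer.
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}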
	I1010 17:30:36.456511   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 17:30:36.476751   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 17:30:36.496310   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 17:30:36.514988   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 17:30:36.533841   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1010 17:30:36.552425   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 17:30:36.570981   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 17:30:36.589519   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 17:30:36.607848   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 17:30:36.627932   10838 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 17:30:36.641761   10838 ssh_runner.go:195] Run: openssl version
	I1010 17:30:36.647705   10838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 17:30:36.658884   10838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 17:30:36.662498   10838 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 17:30:36.662553   10838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 17:30:36.695938   10838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
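The four commands above install minikubeCA.pem into the node's trust store: link it under /usr/share/ca-certificates, compute its OpenSSL subject hash (b5213941 here), and symlink <hash>.0 to it in /etc/ssl/certs, which is how OpenSSL locates CAs by directory lookup. A Go sketch of the hash-and-symlink step, shelling out to the same openssl invocation; paths follow the log and error handling is trimmed:

	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		// Subject hash drives OpenSSL's CA lookup in /etc/ssl/certs.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			log.Fatal(err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		os.Remove(link) // emulate ln -f
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			log.Fatal(err)
		}
	}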
	I1010 17:30:36.705388   10838 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 17:30:36.709009   10838 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 17:30:36.709073   10838 kubeadm.go:400] StartCluster: {Name:addons-594989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-594989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 17:30:36.709159   10838 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:30:36.709230   10838 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:30:36.736008   10838 cri.go:89] found id: ""
	I1010 17:30:36.736086   10838 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 17:30:36.744667   10838 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 17:30:36.753014   10838 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1010 17:30:36.753095   10838 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 17:30:36.761163   10838 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 17:30:36.761183   10838 kubeadm.go:157] found existing configuration files:
	
	I1010 17:30:36.761220   10838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 17:30:36.769267   10838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 17:30:36.769315   10838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 17:30:36.777999   10838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 17:30:36.786347   10838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 17:30:36.786403   10838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 17:30:36.794664   10838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 17:30:36.802906   10838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 17:30:36.802960   10838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 17:30:36.810638   10838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 17:30:36.818492   10838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 17:30:36.818538   10838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
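The grep/rm pairs above are the stale-config sweep: for each kubeconfig kubeadm would otherwise reuse, keep it only if it already points at the expected control-plane endpoint, and delete it otherwise so kubeadm regenerates it (here all four are simply absent on first start). A stdlib sketch of the same check done locally rather than over SSH:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			// Missing or pointing elsewhere: remove so kubeadm writes a fresh one.
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(f)
			}
		}
	}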
	I1010 17:30:36.826261   10838 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1010 17:30:36.862682   10838 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1010 17:30:36.862742   10838 kubeadm.go:318] [preflight] Running pre-flight checks
	I1010 17:30:36.882354   10838 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1010 17:30:36.882414   10838 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1010 17:30:36.882442   10838 kubeadm.go:318] OS: Linux
	I1010 17:30:36.882501   10838 kubeadm.go:318] CGROUPS_CPU: enabled
	I1010 17:30:36.882579   10838 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1010 17:30:36.882667   10838 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1010 17:30:36.882743   10838 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1010 17:30:36.882814   10838 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1010 17:30:36.882881   10838 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1010 17:30:36.882950   10838 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1010 17:30:36.883029   10838 kubeadm.go:318] CGROUPS_IO: enabled
	I1010 17:30:36.934833   10838 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 17:30:36.934959   10838 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 17:30:36.935109   10838 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 17:30:36.941920   10838 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 17:30:36.943895   10838 out.go:252]   - Generating certificates and keys ...
	I1010 17:30:36.943990   10838 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1010 17:30:36.944115   10838 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1010 17:30:37.237708   10838 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1010 17:30:37.571576   10838 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1010 17:30:37.872370   10838 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1010 17:30:38.236368   10838 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1010 17:30:38.285619   10838 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1010 17:30:38.285805   10838 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-594989 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1010 17:30:38.358044   10838 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1010 17:30:38.358248   10838 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-594989 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1010 17:30:38.795818   10838 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1010 17:30:39.043953   10838 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1010 17:30:39.269139   10838 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1010 17:30:39.269249   10838 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 17:30:39.585961   10838 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 17:30:39.816850   10838 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 17:30:39.942130   10838 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 17:30:40.382870   10838 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 17:30:40.449962   10838 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 17:30:40.451000   10838 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 17:30:40.455038   10838 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 17:30:40.456513   10838 out.go:252]   - Booting up control plane ...
	I1010 17:30:40.456597   10838 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 17:30:40.456681   10838 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 17:30:40.457249   10838 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 17:30:40.470937   10838 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 17:30:40.471115   10838 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1010 17:30:40.477516   10838 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1010 17:30:40.477816   10838 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 17:30:40.477871   10838 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1010 17:30:40.573934   10838 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 17:30:40.574138   10838 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 17:30:41.075746   10838 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.949615ms
	I1010 17:30:41.079730   10838 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1010 17:30:41.079839   10838 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1010 17:30:41.079949   10838 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1010 17:30:41.080087   10838 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1010 17:30:42.214908   10838 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.135122439s
	I1010 17:30:44.000622   10838 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.920896089s
	I1010 17:30:45.581431   10838 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501641891s
	I1010 17:30:45.591384   10838 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 17:30:45.599794   10838 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 17:30:45.607635   10838 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 17:30:45.607939   10838 kubeadm.go:318] [mark-control-plane] Marking the node addons-594989 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 17:30:45.614517   10838 kubeadm.go:318] [bootstrap-token] Using token: g8m8ob.vbiavqs0zz8j6p83
	I1010 17:30:45.615663   10838 out.go:252]   - Configuring RBAC rules ...
	I1010 17:30:45.615821   10838 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 17:30:45.618526   10838 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 17:30:45.622660   10838 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 17:30:45.624718   10838 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 17:30:45.627306   10838 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 17:30:45.629262   10838 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 17:30:45.986784   10838 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 17:30:46.399265   10838 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1010 17:30:46.985935   10838 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1010 17:30:46.986793   10838 kubeadm.go:318] 
	I1010 17:30:46.986900   10838 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1010 17:30:46.986918   10838 kubeadm.go:318] 
	I1010 17:30:46.986996   10838 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1010 17:30:46.987005   10838 kubeadm.go:318] 
	I1010 17:30:46.987040   10838 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1010 17:30:46.987136   10838 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 17:30:46.987220   10838 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 17:30:46.987238   10838 kubeadm.go:318] 
	I1010 17:30:46.987363   10838 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1010 17:30:46.987380   10838 kubeadm.go:318] 
	I1010 17:30:46.987450   10838 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 17:30:46.987457   10838 kubeadm.go:318] 
	I1010 17:30:46.987530   10838 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1010 17:30:46.987630   10838 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 17:30:46.987725   10838 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 17:30:46.987734   10838 kubeadm.go:318] 
	I1010 17:30:46.987838   10838 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 17:30:46.987938   10838 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1010 17:30:46.987948   10838 kubeadm.go:318] 
	I1010 17:30:46.988081   10838 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token g8m8ob.vbiavqs0zz8j6p83 \
	I1010 17:30:46.988231   10838 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f \
	I1010 17:30:46.988271   10838 kubeadm.go:318] 	--control-plane 
	I1010 17:30:46.988280   10838 kubeadm.go:318] 
	I1010 17:30:46.988388   10838 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1010 17:30:46.988395   10838 kubeadm.go:318] 
	I1010 17:30:46.988505   10838 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token g8m8ob.vbiavqs0zz8j6p83 \
	I1010 17:30:46.988632   10838 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f 
	I1010 17:30:46.989919   10838 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1010 17:30:46.990089   10838 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
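The join commands printed above embed --discovery-token-ca-cert-hash sha256:08dcb6…; kubeadm computes this value over the CA certificate's DER-encoded Subject Public Key Info (SPKI), not over the PEM file itself. A sketch that recomputes it from the cert path used in this run:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// kubeadm hashes the DER-encoded SPKI, so this matches the join command.
		fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
	}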
	I1010 17:30:46.990123   10838 cni.go:84] Creating CNI manager for ""
	I1010 17:30:46.990141   10838 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 17:30:46.992445   10838 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1010 17:30:46.993568   10838 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1010 17:30:46.997677   10838 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1010 17:30:46.997689   10838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1010 17:30:47.011602   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1010 17:30:47.207486   10838 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 17:30:47.207566   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:47.207584   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-594989 minikube.k8s.io/updated_at=2025_10_10T17_30_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46 minikube.k8s.io/name=addons-594989 minikube.k8s.io/primary=true
	I1010 17:30:47.217041   10838 ops.go:34] apiserver oom_adj: -16
	I1010 17:30:47.279542   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:47.780184   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:48.279973   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:48.780138   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:49.280342   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:49.779793   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:50.280472   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:50.779777   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:51.280149   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:51.780325   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:52.279623   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:52.339889   10838 kubeadm.go:1113] duration metric: took 5.132383023s to wait for elevateKubeSystemPrivileges
	I1010 17:30:52.339928   10838 kubeadm.go:402] duration metric: took 15.630857792s to StartCluster
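The burst of `kubectl get sa default` runs above is the elevateKubeSystemPrivileges wait: the command is retried roughly every 500ms until the controller-manager has created the default service account, which the minikube-rbac clusterrolebinding needs. A local stdlib approximation of that loop; the timeout is an assumption, and minikube runs the same probe over SSH with its bundled kubectl:

	package main

	import (
		"log"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
		for {
			err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
				"get", "sa", "default").Run()
			if err == nil {
				return // default service account exists; RBAC setup can proceed
			}
			if time.Now().After(deadline) {
				log.Fatalf("default service account never appeared: %v", err)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
		}
	}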
	I1010 17:30:52.339951   10838 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:52.340081   10838 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 17:30:52.340471   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:52.340642   10838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 17:30:52.340654   10838 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 17:30:52.340730   10838 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1010 17:30:52.340849   10838 addons.go:69] Setting yakd=true in profile "addons-594989"
	I1010 17:30:52.340873   10838 addons.go:238] Setting addon yakd=true in "addons-594989"
	I1010 17:30:52.340873   10838 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-594989"
	I1010 17:30:52.340883   10838 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-594989"
	I1010 17:30:52.340889   10838 addons.go:69] Setting ingress=true in profile "addons-594989"
	I1010 17:30:52.340902   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.340906   10838 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-594989"
	I1010 17:30:52.340910   10838 addons.go:69] Setting registry=true in profile "addons-594989"
	I1010 17:30:52.340914   10838 addons.go:238] Setting addon ingress=true in "addons-594989"
	I1010 17:30:52.340912   10838 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:30:52.340923   10838 addons.go:238] Setting addon registry=true in "addons-594989"
	I1010 17:30:52.340940   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.340941   10838 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-594989"
	I1010 17:30:52.340950   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.340958   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.340972   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.340981   10838 addons.go:69] Setting default-storageclass=true in profile "addons-594989"
	I1010 17:30:52.340999   10838 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-594989"
	I1010 17:30:52.341028   10838 addons.go:69] Setting registry-creds=true in profile "addons-594989"
	I1010 17:30:52.341067   10838 addons.go:238] Setting addon registry-creds=true in "addons-594989"
	I1010 17:30:52.341090   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.341260   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341418   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341435   10838 addons.go:69] Setting volcano=true in profile "addons-594989"
	I1010 17:30:52.341446   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341450   10838 addons.go:69] Setting gcp-auth=true in profile "addons-594989"
	I1010 17:30:52.341454   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341457   10838 addons.go:69] Setting inspektor-gadget=true in profile "addons-594989"
	I1010 17:30:52.341461   10838 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-594989"
	I1010 17:30:52.341469   10838 mustload.go:65] Loading cluster: addons-594989
	I1010 17:30:52.341473   10838 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-594989"
	I1010 17:30:52.341478   10838 addons.go:69] Setting metrics-server=true in profile "addons-594989"
	I1010 17:30:52.341490   10838 addons.go:238] Setting addon metrics-server=true in "addons-594989"
	I1010 17:30:52.341502   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341509   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.341617   10838 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:30:52.341725   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341881   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341902   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341437   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.342840   10838 addons.go:69] Setting cloud-spanner=true in profile "addons-594989"
	I1010 17:30:52.342853   10838 addons.go:238] Setting addon cloud-spanner=true in "addons-594989"
	I1010 17:30:52.342877   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.342923   10838 addons.go:69] Setting volumesnapshots=true in profile "addons-594989"
	I1010 17:30:52.342940   10838 addons.go:238] Setting addon volumesnapshots=true in "addons-594989"
	I1010 17:30:52.342963   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.341470   10838 addons.go:238] Setting addon inspektor-gadget=true in "addons-594989"
	I1010 17:30:52.343011   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.343325   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.343430   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.343439   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341448   10838 addons.go:69] Setting ingress-dns=true in profile "addons-594989"
	I1010 17:30:52.343844   10838 addons.go:238] Setting addon ingress-dns=true in "addons-594989"
	I1010 17:30:52.343877   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.344336   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.340889   10838 addons.go:69] Setting storage-provisioner=true in profile "addons-594989"
	I1010 17:30:52.344558   10838 addons.go:238] Setting addon storage-provisioner=true in "addons-594989"
	I1010 17:30:52.344584   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.341451   10838 addons.go:238] Setting addon volcano=true in "addons-594989"
	I1010 17:30:52.344642   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.342827   10838 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-594989"
	I1010 17:30:52.344787   10838 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-594989"
	I1010 17:30:52.344821   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.341437   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.345116   10838 out.go:179] * Verifying Kubernetes components...
	I1010 17:30:52.348075   10838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 17:30:52.354579   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.355386   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.355856   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
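Before touching each addon, the cli_runner lines above confirm the node container is still running by asking the Docker CLI for its state with a Go template. A sketch of that probe, assuming the docker CLI is on PATH:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"addons-594989", "--format", "{{.State.Status}}").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // e.g. "running"
	}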
	I1010 17:30:52.386390   10838 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1010 17:30:52.388793   10838 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1010 17:30:52.388820   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1010 17:30:52.388881   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.402639   10838 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-594989"
	I1010 17:30:52.402671   10838 addons.go:238] Setting addon default-storageclass=true in "addons-594989"
	I1010 17:30:52.402686   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.402702   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.403177   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.403212   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.404011   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1010 17:30:52.405150   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1010 17:30:52.405172   10838 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1010 17:30:52.407545   10838 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1010 17:30:52.408259   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1010 17:30:52.409724   10838 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1010 17:30:52.409774   10838 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1010 17:30:52.409835   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.410863   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1010 17:30:52.410899   10838 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1010 17:30:52.410913   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1010 17:30:52.410959   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.415092   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1010 17:30:52.418938   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1010 17:30:52.426025   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1010 17:30:52.426168   10838 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1010 17:30:52.426450   10838 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1010 17:30:52.427734   10838 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1010 17:30:52.427752   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1010 17:30:52.427811   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.429111   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1010 17:30:52.430103   10838 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1010 17:30:52.430124   10838 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1010 17:30:52.430182   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.430242   10838 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1010 17:30:52.431424   10838 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1010 17:30:52.432592   10838 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1010 17:30:52.432610   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1010 17:30:52.432663   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.432922   10838 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1010 17:30:52.434175   10838 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 17:30:52.434186   10838 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1010 17:30:52.434191   10838 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 17:30:52.434304   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.436276   10838 out.go:179]   - Using image docker.io/registry:3.0.0
	I1010 17:30:52.437282   10838 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1010 17:30:52.437301   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1010 17:30:52.437351   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.439138   10838 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1010 17:30:52.440265   10838 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1010 17:30:52.440282   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1010 17:30:52.440334   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.444754   10838 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1010 17:30:52.446865   10838 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1010 17:30:52.446888   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1010 17:30:52.446949   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.447791   10838 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1010 17:30:52.449312   10838 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1010 17:30:52.449331   10838 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1010 17:30:52.449415   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.454974   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.461979   10838 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 17:30:52.462005   10838 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 17:30:52.462072   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.469549   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1010 17:30:52.470720   10838 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1010 17:30:52.470748   10838 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1010 17:30:52.470813   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	W1010 17:30:52.471286   10838 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1010 17:30:52.479195   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.479659   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.480084   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.480177   10838 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 17:30:52.481651   10838 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 17:30:52.481669   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 17:30:52.482253   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.501847   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.502437   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.504015   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.514495   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.515374   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.516707   10838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1010 17:30:52.517755   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.520122   10838 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1010 17:30:52.522119   10838 out.go:179]   - Using image docker.io/busybox:stable
	I1010 17:30:52.523412   10838 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1010 17:30:52.523759   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1010 17:30:52.523824   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.523662   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.523480   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.535080   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.545451   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.558089   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.560619   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.578490   10838 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 17:30:52.687687   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1010 17:30:52.689477   10838 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1010 17:30:52.689500   10838 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1010 17:30:52.692402   10838 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:30:52.692423   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1010 17:30:52.694113   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1010 17:30:52.701841   10838 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1010 17:30:52.701863   10838 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1010 17:30:52.706979   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1010 17:30:52.710882   10838 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 17:30:52.710900   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1010 17:30:52.711214   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1010 17:30:52.720246   10838 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1010 17:30:52.720270   10838 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1010 17:30:52.728725   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1010 17:30:52.732738   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1010 17:30:52.738398   10838 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1010 17:30:52.738482   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1010 17:30:52.740877   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1010 17:30:52.742855   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:30:52.744088   10838 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1010 17:30:52.744109   10838 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1010 17:30:52.748684   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 17:30:52.752580   10838 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 17:30:52.752608   10838 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 17:30:52.767320   10838 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1010 17:30:52.767344   10838 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1010 17:30:52.783095   10838 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1010 17:30:52.783125   10838 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1010 17:30:52.793840   10838 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1010 17:30:52.793870   10838 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1010 17:30:52.797623   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1010 17:30:52.801359   10838 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 17:30:52.801382   10838 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 17:30:52.801555   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 17:30:52.820711   10838 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1010 17:30:52.820751   10838 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1010 17:30:52.838956   10838 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1010 17:30:52.838982   10838 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1010 17:30:52.865707   10838 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1010 17:30:52.865734   10838 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1010 17:30:52.873664   10838 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1010 17:30:52.873759   10838 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1010 17:30:52.884425   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 17:30:52.897170   10838 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1010 17:30:52.897197   10838 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1010 17:30:52.938677   10838 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1010 17:30:52.938800   10838 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1010 17:30:52.944803   10838 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1010 17:30:52.944870   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1010 17:30:52.983822   10838 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1010 17:30:52.983846   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1010 17:30:53.000443   10838 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1010 17:30:53.001430   10838 node_ready.go:35] waiting up to 6m0s for node "addons-594989" to be "Ready" ...
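The 6m0s node wait polls the node's Ready condition; the "Ready":"False" retries further down are this loop observing the node before the runtime and CNI settle. An equivalent one-off check (illustrative):

	# prints "True" once the kubelet reports the node Ready
	kubectl get node addons-594989 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
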
	I1010 17:30:53.001741   10838 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1010 17:30:53.001755   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1010 17:30:53.022578   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1010 17:30:53.042897   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1010 17:30:53.088766   10838 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1010 17:30:53.088863   10838 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1010 17:30:53.162893   10838 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1010 17:30:53.162970   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1010 17:30:53.202950   10838 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1010 17:30:53.202972   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1010 17:30:53.232453   10838 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1010 17:30:53.232477   10838 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1010 17:30:53.318640   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1010 17:30:53.516379   10838 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-594989" context rescaled to 1 replicas
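A single-node cluster does not need two CoreDNS replicas, so the deployment is scaled down to one. The equivalent command (illustrative):

	kubectl -n kube-system scale deployment coredns --replicas=1
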
	I1010 17:30:53.854521   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.147504363s)
	I1010 17:30:53.854573   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.143295661s)
	I1010 17:30:53.854581   10838 addons.go:479] Verifying addon ingress=true in "addons-594989"
	I1010 17:30:53.854605   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.125851669s)
	I1010 17:30:53.854664   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.121904158s)
	I1010 17:30:53.854740   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.113780873s)
	I1010 17:30:53.854909   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.112035171s)
	W1010 17:30:53.854965   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:30:53.854991   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.106279991s)
	I1010 17:30:53.854992   10838 retry.go:31] will retry after 366.515538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
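The recurring "apiVersion not set, kind not set" failure means kubectl's client-side validation found a document in ig-crd.yaml missing the mandatory type metadata; every object in a manifest must carry both fields. A hypothetical well-formed header for a CRD (the actual file contents are not captured in this log):

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: <crd-name>

The retries below switch to kubectl apply --force, but since the manifest itself is invalid, every attempt fails identically: the gadget DaemonSet keeps getting configured while its CRD never lands, and backoff cannot help with a deterministic error.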
	I1010 17:30:53.855029   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.053445056s)
	I1010 17:30:53.855125   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.057465491s)
	I1010 17:30:53.855151   10838 addons.go:479] Verifying addon registry=true in "addons-594989"
	I1010 17:30:53.855392   10838 addons.go:479] Verifying addon metrics-server=true in "addons-594989"
	I1010 17:30:53.859251   10838 out.go:179] * Verifying ingress addon...
	I1010 17:30:53.859253   10838 out.go:179] * Verifying registry addon...
	I1010 17:30:53.859258   10838 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-594989 service yakd-dashboard -n yakd-dashboard
	
	I1010 17:30:53.861031   10838 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1010 17:30:53.861042   10838 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W1010 17:30:53.861614   10838 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
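The storage-provisioner-rancher error is Kubernetes optimistic concurrency at work: the client built its update from a stale resourceVersion because something else modified the StorageClass in between, so the write was rejected. Re-reading and retrying, or using a patch (which carries no resourceVersion), avoids the conflict; marking the class default by patch, for example (illustrative):

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
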
	I1010 17:30:53.863569   10838 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1010 17:30:53.863688   10838 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1010 17:30:53.863705   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
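The kapi.go lines here and below poll pods matching a label selector until they leave Pending ("Pending: [<nil>]" is the Pending phase with no further detail recorded yet). An equivalent blocking wait (illustrative):

	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=6m
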
	I1010 17:30:54.222254   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:30:54.364651   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:54.364788   10838 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1010 17:30:54.364802   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:54.369727   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.326734683s)
	W1010 17:30:54.369775   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1010 17:30:54.369803   10838 retry.go:31] will retry after 352.099198ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
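"ensure CRDs are installed first" is the classic CRD/CR ordering race: the VolumeSnapshotClass object was submitted in the same apply batch as the CRDs that define its kind, before the API server had registered them. The stdout confirms the CRDs themselves were created, so the forced re-apply at 17:30:54 succeeds (it completes at 17:30:57 with no further error for this batch). Splitting the batch and waiting for the CRD to become Established sidesteps the race entirely (illustrative):

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f csi-hostpath-snapshotclass.yaml
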
	I1010 17:30:54.369981   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.051290791s)
	I1010 17:30:54.370014   10838 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-594989"
	I1010 17:30:54.372517   10838 out.go:179] * Verifying csi-hostpath-driver addon...
	I1010 17:30:54.374478   10838 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1010 17:30:54.377671   10838 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1010 17:30:54.377691   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:30:54.722820   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1010 17:30:54.776473   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:30:54.776507   10838 retry.go:31] will retry after 528.971148ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:30:54.864299   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:54.864437   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:54.877162   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:30:55.004421   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:30:55.306241   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:30:55.364747   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:55.364875   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:55.377009   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:30:55.863333   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:55.863500   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:55.878080   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:30:56.364459   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:56.364610   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:56.376474   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:30:56.864231   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:56.864421   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:56.877031   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:30:57.194283   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.47140917s)
	I1010 17:30:57.194342   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.888063958s)
	W1010 17:30:57.194383   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:30:57.194407   10838 retry.go:31] will retry after 693.738793ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:30:57.365132   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:57.365259   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:57.377209   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:30:57.504144   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:30:57.863593   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:57.863703   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:57.876885   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:30:57.889011   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:30:58.363344   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:58.363403   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:58.377006   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:30:58.413710   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:30:58.413738   10838 retry.go:31] will retry after 794.474194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:30:58.864129   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:58.864316   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:58.876872   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:30:59.208781   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:30:59.364779   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:59.364863   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:59.377199   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:30:59.721887   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:30:59.721916   10838 retry.go:31] will retry after 807.314696ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:30:59.863417   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:59.863563   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:59.877031   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:00.004491   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:00.063367   10838 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1010 17:31:00.063439   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:31:00.080754   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:31:00.183900   10838 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1010 17:31:00.197922   10838 addons.go:238] Setting addon gcp-auth=true in "addons-594989"
	I1010 17:31:00.197987   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:31:00.198363   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:31:00.216450   10838 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1010 17:31:00.216498   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:31:00.233656   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:31:00.327665   10838 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1010 17:31:00.328901   10838 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1010 17:31:00.330157   10838 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1010 17:31:00.330174   10838 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1010 17:31:00.344192   10838 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1010 17:31:00.344211   10838 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1010 17:31:00.357424   10838 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1010 17:31:00.357441   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1010 17:31:00.364329   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:00.364485   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:00.371389   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1010 17:31:00.377577   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:00.530359   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:31:00.674223   10838 addons.go:479] Verifying addon gcp-auth=true in "addons-594989"
	I1010 17:31:00.675279   10838 out.go:179] * Verifying gcp-auth addon...
	I1010 17:31:00.676750   10838 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1010 17:31:00.678935   10838 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1010 17:31:00.678955   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
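The gcp-auth addon stages the credentials file and project id onto the node (the two "scp memory" lines at 17:31:00), applies its namespace, service, and webhook manifests, and then waits for the webhook pod by label, mirroring the other addon verifications. A spot check (illustrative):

	kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth
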
	I1010 17:31:00.863473   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:00.863651   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:00.876438   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:01.068183   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:01.068211   10838 retry.go:31] will retry after 1.293717217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:01.179359   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:01.364335   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:01.364562   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:01.377288   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:01.679713   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:01.864346   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:01.864499   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:01.877935   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:02.180046   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:02.362320   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:31:02.364846   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:02.365041   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:02.377756   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:02.504445   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:02.680630   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:02.864352   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:02.864530   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:02.878156   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:02.900507   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:02.900533   10838 retry.go:31] will retry after 3.904162671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:03.179581   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:03.364541   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:03.364678   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:03.377160   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:03.679314   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:03.863901   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:03.864091   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:03.877469   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:04.179425   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:04.364144   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:04.364344   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:04.377925   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:04.504515   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:04.680233   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:04.863775   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:04.863868   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:04.877031   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:05.180225   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:05.364252   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:05.364492   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:05.377760   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:05.680422   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:05.863745   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:05.863909   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:05.877114   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:06.180145   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:06.364636   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:06.364839   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:06.377005   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:06.680119   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:06.805297   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:31:06.864354   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:06.864577   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:06.879790   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:07.004311   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:07.179715   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1010 17:31:07.340451   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:07.340477   10838 retry.go:31] will retry after 2.748276662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:07.364809   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:07.364990   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:07.377279   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:07.679577   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:07.864200   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:07.864412   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:07.877784   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:08.179925   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:08.364302   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:08.364505   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:08.377655   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:08.679774   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:08.864248   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:08.864459   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:08.877616   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:09.179736   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:09.364848   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:09.365019   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:09.377075   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:09.504644   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:09.680432   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:09.864154   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:09.864347   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:09.877441   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:10.089264   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:31:10.180336   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:10.364130   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:10.364342   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:10.378175   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:10.622562   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:10.622592   10838 retry.go:31] will retry after 5.588682028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
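Across the gadget retries the delays grow from 366ms through 528ms, 693ms, 794ms, 807ms, 1.29s, 3.9s, 2.7s, to 5.6s: backoff with jitter, which is why consecutive waits are not strictly monotonic. A minimal sketch of the same shape, assuming a plain shell loop rather than minikube's retry.go:

	# grow the delay ~1.5x per attempt, with +/-20% jitter (illustrative)
	delay_ms=400
	until kubectl apply --force -f ig-crd.yaml -f ig-deployment.yaml; do
	  jittered=$(awk -v d="$delay_ms" 'BEGIN { srand(); printf "%.3f", d * (0.8 + 0.4 * rand()) / 1000 }')
	  sleep "$jittered"                 # seconds
	  delay_ms=$(( delay_ms * 3 / 2 ))
	done

A deterministic validation error defeats this loop, of course; the growing delay only spaces out identical failures.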
	I1010 17:31:10.679857   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:10.864483   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:10.864537   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:10.877468   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:11.179006   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:11.363939   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:11.363981   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:11.376980   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:11.679922   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:11.864635   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:11.864800   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:11.876618   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:12.004207   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:12.179812   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:12.364448   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:12.364632   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:12.377644   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:12.679946   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:12.864469   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:12.864587   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:12.876766   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:13.179361   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:13.364251   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:13.364314   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:13.377222   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:13.679885   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:13.864402   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:13.864582   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:13.877520   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:14.179343   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:14.363832   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:14.363885   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:14.376920   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:14.504331   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:14.679984   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:14.864314   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:14.864468   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:14.877867   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:15.179147   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:15.363905   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:15.363918   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:15.376809   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:15.679765   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:15.864383   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:15.864579   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:15.877533   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:16.179034   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:16.212153   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:31:16.364227   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:16.364321   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:16.377351   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:16.678997   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1010 17:31:16.728181   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:16.728208   10838 retry.go:31] will retry after 5.366319964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:16.863814   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:16.863959   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:16.877024   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:17.004460   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:17.180133   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:17.363928   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:17.364140   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:17.377379   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:17.679231   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:17.863876   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:17.863896   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:17.876938   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:18.179713   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:18.364206   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:18.364352   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:18.377182   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:18.680290   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:18.863938   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:18.863976   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:18.876988   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:19.179661   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:19.364174   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:19.364251   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:19.376985   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:19.504381   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:19.679789   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:19.864421   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:19.864544   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:19.877825   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:20.179540   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:20.364429   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:20.364524   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:20.377245   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:20.680091   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:20.863393   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:20.863574   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:20.877577   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:21.182300   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:21.363825   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:21.364068   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:21.376659   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:21.679439   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:21.864166   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:21.864214   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:21.877451   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:22.004659   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:22.094857   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:31:22.179810   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:22.364615   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:22.364733   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:22.377362   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:22.612540   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:22.612569   10838 retry.go:31] will retry after 9.056196227s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:22.679891   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:22.864503   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:22.864551   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:22.877479   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:23.178873   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:23.364646   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:23.364734   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:23.376458   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:23.679275   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:23.863820   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:23.864006   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:23.876728   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:24.179095   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:24.364688   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:24.364713   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:24.376673   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:24.504019   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:24.679303   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:24.863840   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:24.863854   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:24.876734   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:25.179581   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:25.364229   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:25.364263   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:25.376812   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:25.680350   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:25.863789   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:25.863901   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:25.876739   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:26.179216   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:26.363743   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:26.363965   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:26.376610   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:26.679446   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:26.863896   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:26.864010   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:26.877259   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:27.004527   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:27.180333   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:27.364129   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:27.364267   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:27.377594   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:27.679661   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:27.864364   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:27.864546   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:27.878093   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:28.179780   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:28.364548   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:28.364645   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:28.376759   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:28.679858   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:28.864225   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:28.864368   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:28.877424   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:29.179266   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:29.363683   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:29.363848   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:29.376955   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:29.504351   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:29.679659   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:29.864172   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:29.864383   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:29.877602   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:30.179486   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:30.364112   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:30.364320   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:30.377312   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:30.680141   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:30.863748   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:30.863826   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:30.876711   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:31.179279   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:31.363919   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:31.364104   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:31.377309   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:31.504865   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:31.669043   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:31:31.679802   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:31.864921   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:31.865281   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:31.877318   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:32.179514   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1010 17:31:32.197595   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:32.197627   10838 retry.go:31] will retry after 31.007929001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:32.364326   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:32.364523   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:32.377639   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:32.679748   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:32.864382   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:32.864526   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:32.877621   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:33.179576   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:33.364184   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:33.364416   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:33.377197   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:33.680337   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:33.864864   10838 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1010 17:31:33.864889   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:33.867102   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:33.879663   10838 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1010 17:31:33.879687   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:34.004743   10838 node_ready.go:49] node "addons-594989" is "Ready"
	I1010 17:31:34.004776   10838 node_ready.go:38] duration metric: took 41.003324473s for node "addons-594989" to be "Ready" ...
	I1010 17:31:34.004795   10838 api_server.go:52] waiting for apiserver process to appear ...
	I1010 17:31:34.004993   10838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 17:31:34.021879   10838 api_server.go:72] duration metric: took 41.68119327s to wait for apiserver process to appear ...
	I1010 17:31:34.021906   10838 api_server.go:88] waiting for apiserver healthz status ...
	I1010 17:31:34.021928   10838 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1010 17:31:34.026649   10838 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1010 17:31:34.027444   10838 api_server.go:141] control plane version: v1.34.1
	I1010 17:31:34.027477   10838 api_server.go:131] duration metric: took 5.564334ms to wait for apiserver health ...
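
Note: the apiserver wait above has two phases: pgrep first confirms the kube-apiserver process exists, then an HTTPS GET against /healthz must return 200 with body "ok". A rough sketch of such a probe, assuming only what the log shows (endpoint URL, 200/"ok" contract); the TLS-skip and polling cadence are simplifications, since the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz polls the apiserver's /healthz endpoint until it returns
// HTTP 200 with body "ok", as in the log above.
func probeHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// verification skipped for brevity in this sketch
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %v", url, timeout)
}

func main() {
	if err := probeHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
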
	I1010 17:31:34.027488   10838 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 17:31:34.032705   10838 system_pods.go:59] 20 kube-system pods found
	I1010 17:31:34.032743   10838 system_pods.go:61] "amd-gpu-device-plugin-b5h8w" [fb6102b5-8464-4645-901a-f2f471fa6e63] Pending
	I1010 17:31:34.032760   10838 system_pods.go:61] "coredns-66bc5c9577-lpc4f" [b200196b-e5ba-474d-8cb8-3d2efaa0a804] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 17:31:34.032776   10838 system_pods.go:61] "csi-hostpath-attacher-0" [9b70703d-450d-4eb0-9ac8-149987429c8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1010 17:31:34.032791   10838 system_pods.go:61] "csi-hostpath-resizer-0" [4664e14f-70d0-44f8-a940-d484058aa2a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1010 17:31:34.032798   10838 system_pods.go:61] "csi-hostpathplugin-4g74f" [51c6b210-2fd8-4bf1-baa9-462eeb58c4ba] Pending
	I1010 17:31:34.032804   10838 system_pods.go:61] "etcd-addons-594989" [22899be9-2220-4a90-b3ac-dab0d5de26f6] Running
	I1010 17:31:34.032810   10838 system_pods.go:61] "kindnet-rbr7w" [eacdbf14-84fb-49cd-99f2-adc9b3e7914c] Running
	I1010 17:31:34.032815   10838 system_pods.go:61] "kube-apiserver-addons-594989" [2acd25d9-bff0-4093-bf22-15edb85febf2] Running
	I1010 17:31:34.032820   10838 system_pods.go:61] "kube-controller-manager-addons-594989" [e559c848-b52a-4078-a939-1ad3726dbef3] Running
	I1010 17:31:34.032832   10838 system_pods.go:61] "kube-ingress-dns-minikube" [99a30e52-981b-4bce-87c2-4aec7ec2120c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1010 17:31:34.032837   10838 system_pods.go:61] "kube-proxy-2st6b" [f1745076-7557-4cd8-9a96-b547386351a7] Running
	I1010 17:31:34.032850   10838 system_pods.go:61] "kube-scheduler-addons-594989" [dcda625c-1432-41ad-8a3b-733a797a7061] Running
	I1010 17:31:34.032861   10838 system_pods.go:61] "metrics-server-85b7d694d7-wccx5" [5b88af97-0885-40b1-acb2-8d58361a5fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 17:31:34.032874   10838 system_pods.go:61] "nvidia-device-plugin-daemonset-dlkfx" [a5f0e8a4-7957-414f-887d-b3cddab72a1e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1010 17:31:34.032886   10838 system_pods.go:61] "registry-66898fdd98-6gl8m" [7340a6ae-2ed3-4269-8f05-53911db0a12c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1010 17:31:34.032896   10838 system_pods.go:61] "registry-creds-764b6fb674-5k497" [b1404742-2d86-4ac9-91f6-3d70ff795aa1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1010 17:31:34.032906   10838 system_pods.go:61] "registry-proxy-8mr65" [9b638779-096b-4de7-a496-dfbca677f32f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1010 17:31:34.032917   10838 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jt7fl" [fc6a34c5-3334-430b-9788-4218787bf9af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.032927   10838 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ktmdr" [06c7547b-8596-460f-90bd-a79685887c74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.032936   10838 system_pods.go:61] "storage-provisioner" [57838ac1-fa29-48b5-80ef-ff137e742296] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 17:31:34.032943   10838 system_pods.go:74] duration metric: took 5.448522ms to wait for pod list to return data ...
	I1010 17:31:34.032955   10838 default_sa.go:34] waiting for default service account to be created ...
	I1010 17:31:34.034806   10838 default_sa.go:45] found service account: "default"
	I1010 17:31:34.034821   10838 default_sa.go:55] duration metric: took 1.859126ms for default service account to be created ...
	I1010 17:31:34.034828   10838 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 17:31:34.039710   10838 system_pods.go:86] 20 kube-system pods found
	I1010 17:31:34.039738   10838 system_pods.go:89] "amd-gpu-device-plugin-b5h8w" [fb6102b5-8464-4645-901a-f2f471fa6e63] Pending
	I1010 17:31:34.039755   10838 system_pods.go:89] "coredns-66bc5c9577-lpc4f" [b200196b-e5ba-474d-8cb8-3d2efaa0a804] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 17:31:34.039764   10838 system_pods.go:89] "csi-hostpath-attacher-0" [9b70703d-450d-4eb0-9ac8-149987429c8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1010 17:31:34.039774   10838 system_pods.go:89] "csi-hostpath-resizer-0" [4664e14f-70d0-44f8-a940-d484058aa2a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1010 17:31:34.039797   10838 system_pods.go:89] "csi-hostpathplugin-4g74f" [51c6b210-2fd8-4bf1-baa9-462eeb58c4ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1010 17:31:34.039803   10838 system_pods.go:89] "etcd-addons-594989" [22899be9-2220-4a90-b3ac-dab0d5de26f6] Running
	I1010 17:31:34.039810   10838 system_pods.go:89] "kindnet-rbr7w" [eacdbf14-84fb-49cd-99f2-adc9b3e7914c] Running
	I1010 17:31:34.039820   10838 system_pods.go:89] "kube-apiserver-addons-594989" [2acd25d9-bff0-4093-bf22-15edb85febf2] Running
	I1010 17:31:34.039825   10838 system_pods.go:89] "kube-controller-manager-addons-594989" [e559c848-b52a-4078-a939-1ad3726dbef3] Running
	I1010 17:31:34.039833   10838 system_pods.go:89] "kube-ingress-dns-minikube" [99a30e52-981b-4bce-87c2-4aec7ec2120c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1010 17:31:34.039838   10838 system_pods.go:89] "kube-proxy-2st6b" [f1745076-7557-4cd8-9a96-b547386351a7] Running
	I1010 17:31:34.039848   10838 system_pods.go:89] "kube-scheduler-addons-594989" [dcda625c-1432-41ad-8a3b-733a797a7061] Running
	I1010 17:31:34.039855   10838 system_pods.go:89] "metrics-server-85b7d694d7-wccx5" [5b88af97-0885-40b1-acb2-8d58361a5fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 17:31:34.039864   10838 system_pods.go:89] "nvidia-device-plugin-daemonset-dlkfx" [a5f0e8a4-7957-414f-887d-b3cddab72a1e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1010 17:31:34.039877   10838 system_pods.go:89] "registry-66898fdd98-6gl8m" [7340a6ae-2ed3-4269-8f05-53911db0a12c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1010 17:31:34.039885   10838 system_pods.go:89] "registry-creds-764b6fb674-5k497" [b1404742-2d86-4ac9-91f6-3d70ff795aa1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1010 17:31:34.039897   10838 system_pods.go:89] "registry-proxy-8mr65" [9b638779-096b-4de7-a496-dfbca677f32f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1010 17:31:34.039923   10838 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jt7fl" [fc6a34c5-3334-430b-9788-4218787bf9af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.039981   10838 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ktmdr" [06c7547b-8596-460f-90bd-a79685887c74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.040007   10838 system_pods.go:89] "storage-provisioner" [57838ac1-fa29-48b5-80ef-ff137e742296] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 17:31:34.040024   10838 retry.go:31] will retry after 278.311892ms: missing components: kube-dns
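
Note: the "missing components: kube-dns" retries come from a check that every expected kube-system app has at least one Running pod; here CoreDNS is still Pending, so kube-dns is reported missing. A compressed client-go sketch of that kind of check follows; the k8s-app label matching and the component list are assumptions (CoreDNS does carry k8s-app=kube-dns), and minikube's real version lives in system_pods.go:

package syspods

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// missingComponents returns the expected kube-system apps that do not yet
// have a Running pod, e.g. ["kube-dns"] while CoreDNS is still starting.
func missingComponents(ctx context.Context, cs kubernetes.Interface, expected []string) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	running := map[string]bool{}
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			// assumption: pods map to components via the standard k8s-app label
			running[p.Labels["k8s-app"]] = true
		}
	}
	var missing []string
	for _, c := range expected {
		if !running[c] {
			missing = append(missing, c)
		}
	}
	return missing, nil
}
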
	I1010 17:31:34.179698   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:34.324104   10838 system_pods.go:86] 20 kube-system pods found
	I1010 17:31:34.324146   10838 system_pods.go:89] "amd-gpu-device-plugin-b5h8w" [fb6102b5-8464-4645-901a-f2f471fa6e63] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1010 17:31:34.324157   10838 system_pods.go:89] "coredns-66bc5c9577-lpc4f" [b200196b-e5ba-474d-8cb8-3d2efaa0a804] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 17:31:34.324168   10838 system_pods.go:89] "csi-hostpath-attacher-0" [9b70703d-450d-4eb0-9ac8-149987429c8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1010 17:31:34.324176   10838 system_pods.go:89] "csi-hostpath-resizer-0" [4664e14f-70d0-44f8-a940-d484058aa2a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1010 17:31:34.324189   10838 system_pods.go:89] "csi-hostpathplugin-4g74f" [51c6b210-2fd8-4bf1-baa9-462eeb58c4ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1010 17:31:34.324196   10838 system_pods.go:89] "etcd-addons-594989" [22899be9-2220-4a90-b3ac-dab0d5de26f6] Running
	I1010 17:31:34.324212   10838 system_pods.go:89] "kindnet-rbr7w" [eacdbf14-84fb-49cd-99f2-adc9b3e7914c] Running
	I1010 17:31:34.324227   10838 system_pods.go:89] "kube-apiserver-addons-594989" [2acd25d9-bff0-4093-bf22-15edb85febf2] Running
	I1010 17:31:34.324237   10838 system_pods.go:89] "kube-controller-manager-addons-594989" [e559c848-b52a-4078-a939-1ad3726dbef3] Running
	I1010 17:31:34.324250   10838 system_pods.go:89] "kube-ingress-dns-minikube" [99a30e52-981b-4bce-87c2-4aec7ec2120c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1010 17:31:34.324258   10838 system_pods.go:89] "kube-proxy-2st6b" [f1745076-7557-4cd8-9a96-b547386351a7] Running
	I1010 17:31:34.324265   10838 system_pods.go:89] "kube-scheduler-addons-594989" [dcda625c-1432-41ad-8a3b-733a797a7061] Running
	I1010 17:31:34.324276   10838 system_pods.go:89] "metrics-server-85b7d694d7-wccx5" [5b88af97-0885-40b1-acb2-8d58361a5fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 17:31:34.324296   10838 system_pods.go:89] "nvidia-device-plugin-daemonset-dlkfx" [a5f0e8a4-7957-414f-887d-b3cddab72a1e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1010 17:31:34.324305   10838 system_pods.go:89] "registry-66898fdd98-6gl8m" [7340a6ae-2ed3-4269-8f05-53911db0a12c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1010 17:31:34.324313   10838 system_pods.go:89] "registry-creds-764b6fb674-5k497" [b1404742-2d86-4ac9-91f6-3d70ff795aa1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1010 17:31:34.324321   10838 system_pods.go:89] "registry-proxy-8mr65" [9b638779-096b-4de7-a496-dfbca677f32f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1010 17:31:34.324328   10838 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jt7fl" [fc6a34c5-3334-430b-9788-4218787bf9af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.324336   10838 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ktmdr" [06c7547b-8596-460f-90bd-a79685887c74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.324347   10838 system_pods.go:89] "storage-provisioner" [57838ac1-fa29-48b5-80ef-ff137e742296] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 17:31:34.324364   10838 retry.go:31] will retry after 344.769509ms: missing components: kube-dns
	I1010 17:31:34.423538   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:34.423660   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:34.423824   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:34.673857   10838 system_pods.go:86] 20 kube-system pods found
	I1010 17:31:34.673896   10838 system_pods.go:89] "amd-gpu-device-plugin-b5h8w" [fb6102b5-8464-4645-901a-f2f471fa6e63] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1010 17:31:34.673907   10838 system_pods.go:89] "coredns-66bc5c9577-lpc4f" [b200196b-e5ba-474d-8cb8-3d2efaa0a804] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 17:31:34.673918   10838 system_pods.go:89] "csi-hostpath-attacher-0" [9b70703d-450d-4eb0-9ac8-149987429c8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1010 17:31:34.673927   10838 system_pods.go:89] "csi-hostpath-resizer-0" [4664e14f-70d0-44f8-a940-d484058aa2a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1010 17:31:34.673938   10838 system_pods.go:89] "csi-hostpathplugin-4g74f" [51c6b210-2fd8-4bf1-baa9-462eeb58c4ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1010 17:31:34.673944   10838 system_pods.go:89] "etcd-addons-594989" [22899be9-2220-4a90-b3ac-dab0d5de26f6] Running
	I1010 17:31:34.673950   10838 system_pods.go:89] "kindnet-rbr7w" [eacdbf14-84fb-49cd-99f2-adc9b3e7914c] Running
	I1010 17:31:34.673956   10838 system_pods.go:89] "kube-apiserver-addons-594989" [2acd25d9-bff0-4093-bf22-15edb85febf2] Running
	I1010 17:31:34.673962   10838 system_pods.go:89] "kube-controller-manager-addons-594989" [e559c848-b52a-4078-a939-1ad3726dbef3] Running
	I1010 17:31:34.673977   10838 system_pods.go:89] "kube-ingress-dns-minikube" [99a30e52-981b-4bce-87c2-4aec7ec2120c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1010 17:31:34.673984   10838 system_pods.go:89] "kube-proxy-2st6b" [f1745076-7557-4cd8-9a96-b547386351a7] Running
	I1010 17:31:34.673989   10838 system_pods.go:89] "kube-scheduler-addons-594989" [dcda625c-1432-41ad-8a3b-733a797a7061] Running
	I1010 17:31:34.673997   10838 system_pods.go:89] "metrics-server-85b7d694d7-wccx5" [5b88af97-0885-40b1-acb2-8d58361a5fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 17:31:34.674007   10838 system_pods.go:89] "nvidia-device-plugin-daemonset-dlkfx" [a5f0e8a4-7957-414f-887d-b3cddab72a1e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1010 17:31:34.674019   10838 system_pods.go:89] "registry-66898fdd98-6gl8m" [7340a6ae-2ed3-4269-8f05-53911db0a12c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1010 17:31:34.674030   10838 system_pods.go:89] "registry-creds-764b6fb674-5k497" [b1404742-2d86-4ac9-91f6-3d70ff795aa1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1010 17:31:34.674038   10838 system_pods.go:89] "registry-proxy-8mr65" [9b638779-096b-4de7-a496-dfbca677f32f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1010 17:31:34.674060   10838 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jt7fl" [fc6a34c5-3334-430b-9788-4218787bf9af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.674075   10838 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ktmdr" [06c7547b-8596-460f-90bd-a79685887c74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.674083   10838 system_pods.go:89] "storage-provisioner" [57838ac1-fa29-48b5-80ef-ff137e742296] Running
	I1010 17:31:34.674093   10838 system_pods.go:126] duration metric: took 639.259847ms to wait for k8s-apps to be running ...
	I1010 17:31:34.674107   10838 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 17:31:34.674172   10838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 17:31:34.679588   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:34.687898   10838 system_svc.go:56] duration metric: took 13.786119ms WaitForService to wait for kubelet
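
Note: the kubelet check above is just an exit-code test: systemctl is-active --quiet exits 0 when the unit is active, and minikube runs it through its SSH runner. A sketch with plain os/exec (the SSH transport is abstracted away here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// exits 0 iff the kubelet unit is active; same command as in the log
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
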
	I1010 17:31:34.687924   10838 kubeadm.go:586] duration metric: took 42.347251677s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 17:31:34.687948   10838 node_conditions.go:102] verifying NodePressure condition ...
	I1010 17:31:34.690333   10838 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 17:31:34.690354   10838 node_conditions.go:123] node cpu capacity is 8
	I1010 17:31:34.690368   10838 node_conditions.go:105] duration metric: took 2.414779ms to run NodePressure ...
	I1010 17:31:34.690380   10838 start.go:241] waiting for startup goroutines ...
	I1010 17:31:34.864116   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:34.864242   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:34.877216   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:35.179637   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:35.364385   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:35.364526   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:35.465527   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:35.681028   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:35.865882   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:35.868880   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:35.878730   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:36.179746   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:36.364861   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:36.364974   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:36.377623   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:36.679628   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:36.864754   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:36.864930   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:36.877371   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:37.180121   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:37.365208   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:37.365382   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:37.378524   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:37.680273   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:37.864233   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:37.864273   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:37.878036   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:38.179458   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:38.365164   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:38.365482   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:38.378103   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:38.680130   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:38.865066   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:38.865109   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:38.878453   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:39.179729   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:39.364250   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:39.364404   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:39.380552   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:39.679134   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:39.864210   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:39.864252   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:39.877268   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:40.179768   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:40.364361   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:40.364425   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:40.377762   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:40.679336   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:40.863999   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:40.864205   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:40.877262   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:41.179800   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:41.364948   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:41.365177   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:41.377960   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:41.680087   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:41.864988   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:41.865136   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:41.876987   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:42.179503   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:42.364635   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:42.364787   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:42.377251   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:42.680826   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:42.864858   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:42.864985   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:42.877589   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:43.180329   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:43.363738   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:43.363911   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:43.376676   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:43.680406   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:43.864304   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:43.864386   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:43.877640   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:44.178901   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:44.364806   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:44.364859   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:44.376949   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:44.679774   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:44.864540   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:44.864571   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:44.878350   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:45.179506   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:45.364248   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:45.364247   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:45.377236   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:45.679944   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:45.864880   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:45.864949   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:45.877479   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:46.179802   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:46.364604   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:46.364650   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:46.376813   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:46.679948   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:46.865093   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:46.865175   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:46.878007   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:47.179502   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:47.364574   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:47.364638   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:47.377742   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:47.680459   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:47.864766   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:47.864803   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:47.876992   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:48.179541   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:48.364500   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:48.364689   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:48.377233   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:48.680843   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:48.864768   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:48.864819   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:48.876779   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:49.179354   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:49.364092   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:49.364122   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:49.377418   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:49.679511   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:49.864429   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:49.864604   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:49.877165   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:50.179814   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:50.365292   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:50.365359   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:50.378605   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:50.680140   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:50.864227   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:50.864293   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:50.877296   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:51.179640   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:51.364553   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:51.364863   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:51.377237   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:51.680257   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:51.864086   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:51.864132   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:51.877599   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:52.180164   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:52.363985   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:52.364092   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:52.377104   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:52.680303   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:52.864028   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:52.864034   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:52.877168   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:53.179466   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:53.363885   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:53.363983   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:53.376654   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:53.679251   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:53.863648   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:53.863680   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:53.876685   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:54.179765   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:54.364355   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:54.364369   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:54.377446   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:54.680849   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:54.866225   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:54.866382   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:54.879087   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:55.182909   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:55.365646   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:55.365824   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:55.378432   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:55.680463   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:55.865453   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:55.865548   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:55.878683   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:56.180230   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:56.364681   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:56.364765   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:56.378999   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:56.680675   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:56.864790   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:56.864958   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:56.877911   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:57.180564   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:57.364713   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:57.364759   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:57.377887   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:57.679725   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:57.936971   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:57.937204   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:57.937284   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:58.179856   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:58.364919   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:58.364949   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:58.378077   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:58.680279   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:58.864141   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:58.864317   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:58.877950   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:59.179955   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:59.364898   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:59.365365   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:59.377806   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:59.679339   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:59.864064   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:59.864255   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:59.877339   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:00.179971   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:00.365527   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:00.365556   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:00.377291   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:00.680349   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:00.864229   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:00.864259   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:00.878034   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:01.179563   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:01.364695   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:01.364734   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:01.377140   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:01.680110   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:01.863737   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:01.863859   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:01.877267   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:02.180215   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:02.363991   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:02.364078   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:02.377857   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:02.680117   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:02.865211   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:02.865419   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:02.878371   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:03.180172   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:03.206293   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:32:03.365313   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:03.365529   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:03.378024   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:03.679851   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1010 17:32:03.801261   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:32:03.801292   10838 retry.go:31] will retry after 20.604988317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
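	The stderr above pinpoints the failure: kubectl validates that every document in a manifest declares apiVersion and kind, and /etc/kubernetes/addons/ig-crd.yaml is rejected before anything in it is applied (the --validate=false escape hatch it suggests would only defer the rejection to the API server). A sketch of how to diagnose this on the node, with paths copied from the log; the expected CRD headers are an assumption:

	# Show the top of the rejected manifest; a well-formed CRD manifest would open with
	# apiVersion: apiextensions.k8s.io/v1 and kind: CustomResourceDefinition (assumed here)
	minikube -p addons-594989 ssh -- head -n 5 /etc/kubernetes/addons/ig-crd.yaml
	# Replay only the failing file with a server-side dry run to reproduce the error
	minikube -p addons-594989 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=server -f /etc/kubernetes/addons/ig-crd.yaml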
	I1010 17:32:03.864307   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:03.864336   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:03.878514   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:04.180166   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:04.365025   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:04.365088   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:04.378292   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:04.679966   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:04.864643   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:04.864812   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:04.876903   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:05.179258   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:05.364537   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:05.364562   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:05.378104   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:05.680431   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:05.864456   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:05.864550   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:05.877625   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:06.180676   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:06.365244   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:06.365336   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:06.378454   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:06.680494   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:06.864775   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:06.864843   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:06.876952   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:07.179643   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:07.364558   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:07.364621   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:07.377890   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:07.679702   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:07.864547   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:07.864711   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:07.877705   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:08.180363   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:08.363958   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:08.364087   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:08.377419   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:08.679389   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:08.864242   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:08.864346   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:08.877601   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:09.180395   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:09.364128   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:09.364163   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:09.377424   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:09.680393   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:09.864046   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:09.864114   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:09.877166   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:10.179483   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:10.364481   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:10.364503   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:10.377580   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:10.680365   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:10.864077   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:10.864187   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:10.877210   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:11.179368   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:11.364360   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:11.364399   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:11.377797   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:11.679323   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:11.864166   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:11.864223   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:11.877515   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:12.180081   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:12.363636   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:12.363780   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:12.378089   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:12.680598   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:12.864322   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:12.864364   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:12.877776   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:13.179176   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:13.363749   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:13.363766   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:13.376880   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:13.679372   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:13.864089   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:13.864134   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:13.877143   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:14.179755   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:14.364637   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:14.364669   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:14.377657   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:14.679662   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:14.864192   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:14.864345   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:14.877692   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:15.180438   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:15.363854   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:15.363893   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:15.376872   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:15.679528   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:15.864369   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:15.864400   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:15.877869   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:16.179029   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:16.363646   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:16.363826   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:16.376990   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:16.679717   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:16.864610   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:16.864610   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:16.877769   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:17.178859   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:17.364531   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:17.364656   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:17.376872   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:17.679875   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:17.865068   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:17.865122   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:17.877509   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:18.180281   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:18.363786   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:18.363869   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:18.376937   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:18.679474   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:18.864319   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:18.864319   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:18.877328   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:19.180320   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:19.364572   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:19.364583   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:19.377730   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:19.679446   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:19.864605   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:19.864610   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:19.876533   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:20.179755   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:20.364683   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:20.364693   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:20.377873   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:20.680045   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:20.863572   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:20.863745   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:20.877009   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:21.179348   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:21.364392   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:21.364517   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:21.378236   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:21.680338   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:21.864167   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:21.864299   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:21.878079   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:22.180150   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:22.363779   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:22.363833   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:22.377452   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:22.680886   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:22.865108   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:22.865133   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:22.877864   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:23.179520   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:23.364292   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:23.364408   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:23.377663   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:23.680802   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:23.864919   10838 kapi.go:107] duration metric: took 1m30.003883582s to wait for kubernetes.io/minikube-addons=registry ...
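	The kapi.go poll above is minikube's internal readiness loop for the listed labels; an equivalent one-shot check with kubectl is sketched below, assuming the registry addon pods run in kube-system, where minikube deploys its addons:

	# Block until pods carrying the polled label report Ready, or fail after the timeout
	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=120s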
	I1010 17:32:23.864953   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:23.878082   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:24.180238   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:24.365281   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:24.378180   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:24.407179   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:32:24.680859   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:24.865396   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:24.878889   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:32:25.139804   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1010 17:32:25.139918   10838 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
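	Although the addon is reported as failed, the stdout above shows that everything except the CRD file was applied: the gadget namespace, RBAC objects, and daemonset all went through. A sketch for verifying what landed and retrying, using the object and addon names from the log:

	# The daemonset was configured even though the CRD apply failed
	kubectl -n gadget get daemonset gadget
	# Once the manifest is fixed, cycle the addon to re-run the apply
	minikube -p addons-594989 addons disable inspektor-gadget
	minikube -p addons-594989 addons enable inspektor-gadget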
	I1010 17:32:25.179535   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:25.364479   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:25.378141   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:25.679118   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:25.864663   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:25.878656   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:26.179169   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:26.364762   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:26.377084   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:26.679950   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:26.864597   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:26.878152   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:27.179644   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:27.364239   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:27.377922   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:27.679650   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:27.865021   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:27.877914   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:28.180124   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:28.365023   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:28.377281   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:28.683835   10838 kapi.go:107] duration metric: took 1m28.007083421s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1010 17:32:28.686213   10838 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-594989 cluster.
	I1010 17:32:28.687939   10838 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1010 17:32:28.689482   10838 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
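For reference, the opt-out described in the second hint amounts to a single pod label. A minimal sketch, assuming only what the hint states: the label key gcp-auth-skip-secret is taken from the log above, while the pod name, image, and label value are illustrative.

    # Sketch: a pod asking the gcp-auth webhook not to mount credentials.
    # Only the label key comes from the hint; the rest is illustrative.
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds-demo
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
        - name: app
          image: registry.k8s.io/pause:3.9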
	I1010 17:32:28.865129   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:28.877712   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:29.365521   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:29.379726   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:29.865019   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:29.877846   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:30.365037   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:30.377777   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:30.864926   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:30.877237   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:31.365638   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:31.378238   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:31.864453   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:31.877971   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:32.365110   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:32.378287   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:32.864596   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:32.877667   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:33.364314   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:33.377466   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:33.864983   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:33.877197   10838 kapi.go:107] duration metric: took 1m39.502719833s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1010 17:32:34.364547   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:34.864996   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:35.364595   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:35.865061   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:36.364954   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:36.864108   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:37.364461   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:37.865039   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:38.364399   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:38.867417   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:39.364949   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:39.863887   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:40.365017   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:40.869313   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:41.365525   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:41.865199   10838 kapi.go:107] duration metric: took 1m48.004153058s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1010 17:32:41.867540   10838 out.go:179] * Enabled addons: ingress-dns, registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1010 17:32:41.868505   10838 addons.go:514] duration metric: took 1m49.527783965s for enable addons: enabled=[ingress-dns registry-creds nvidia-device-plugin amd-gpu-device-plugin cloud-spanner storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1010 17:32:41.868544   10838 start.go:246] waiting for cluster config update ...
	I1010 17:32:41.868560   10838 start.go:255] writing updated cluster config ...
	I1010 17:32:41.868777   10838 ssh_runner.go:195] Run: rm -f paused
	I1010 17:32:41.872602   10838 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 17:32:41.875236   10838 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lpc4f" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:41.878956   10838 pod_ready.go:94] pod "coredns-66bc5c9577-lpc4f" is "Ready"
	I1010 17:32:41.878972   10838 pod_ready.go:86] duration metric: took 3.719318ms for pod "coredns-66bc5c9577-lpc4f" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:41.880742   10838 pod_ready.go:83] waiting for pod "etcd-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:41.883875   10838 pod_ready.go:94] pod "etcd-addons-594989" is "Ready"
	I1010 17:32:41.883896   10838 pod_ready.go:86] duration metric: took 3.137197ms for pod "etcd-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:41.885470   10838 pod_ready.go:83] waiting for pod "kube-apiserver-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:41.888398   10838 pod_ready.go:94] pod "kube-apiserver-addons-594989" is "Ready"
	I1010 17:32:41.888415   10838 pod_ready.go:86] duration metric: took 2.929841ms for pod "kube-apiserver-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:41.889899   10838 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:42.276344   10838 pod_ready.go:94] pod "kube-controller-manager-addons-594989" is "Ready"
	I1010 17:32:42.276371   10838 pod_ready.go:86] duration metric: took 386.456707ms for pod "kube-controller-manager-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:42.476275   10838 pod_ready.go:83] waiting for pod "kube-proxy-2st6b" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:42.876238   10838 pod_ready.go:94] pod "kube-proxy-2st6b" is "Ready"
	I1010 17:32:42.876265   10838 pod_ready.go:86] duration metric: took 399.963725ms for pod "kube-proxy-2st6b" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:43.076004   10838 pod_ready.go:83] waiting for pod "kube-scheduler-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:43.476342   10838 pod_ready.go:94] pod "kube-scheduler-addons-594989" is "Ready"
	I1010 17:32:43.476368   10838 pod_ready.go:86] duration metric: took 400.341354ms for pod "kube-scheduler-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:43.476377   10838 pod_ready.go:40] duration metric: took 1.603755753s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 17:32:43.519373   10838 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 17:32:43.521477   10838 out.go:179] * Done! kubectl is now configured to use "addons-594989" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 10 17:33:48 addons-594989 crio[799]: time="2025-10-10T17:33:48.242029416Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=da9a07ce-c289-42df-b64d-76455b7ca39b name=/runtime.v1.ImageService/PullImage
	Oct 10 17:33:48 addons-594989 crio[799]: time="2025-10-10T17:33:48.243562162Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Oct 10 17:33:50 addons-594989 crio[799]: time="2025-10-10T17:33:50.19309017Z" level=info msg="Pulled image: docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=da9a07ce-c289-42df-b64d-76455b7ca39b name=/runtime.v1.ImageService/PullImage
	Oct 10 17:33:50 addons-594989 crio[799]: time="2025-10-10T17:33:50.193613601Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=2d3fd15d-09f1-45c6-8012-715f0f775950 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 17:33:50 addons-594989 crio[799]: time="2025-10-10T17:33:50.226699653Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=edd0cc20-1b36-4c47-8e27-3869eb528b40 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 17:33:50 addons-594989 crio[799]: time="2025-10-10T17:33:50.230546476Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-5k497/registry-creds" id=ff156068-1d6b-46c9-b576-966ba4851f61 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 17:33:50 addons-594989 crio[799]: time="2025-10-10T17:33:50.231482474Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 17:33:50 addons-594989 crio[799]: time="2025-10-10T17:33:50.237806549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 17:33:50 addons-594989 crio[799]: time="2025-10-10T17:33:50.238436752Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 17:33:50 addons-594989 crio[799]: time="2025-10-10T17:33:50.266945108Z" level=info msg="Created container b1a6b256b8c98af3e77b04a05b2a65c8cd4c474c6710af81414409c5d6ea1a6d: kube-system/registry-creds-764b6fb674-5k497/registry-creds" id=ff156068-1d6b-46c9-b576-966ba4851f61 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 17:33:50 addons-594989 crio[799]: time="2025-10-10T17:33:50.267493504Z" level=info msg="Starting container: b1a6b256b8c98af3e77b04a05b2a65c8cd4c474c6710af81414409c5d6ea1a6d" id=ef8cc980-ef45-4fe2-a01e-1500e331c155 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 17:33:50 addons-594989 crio[799]: time="2025-10-10T17:33:50.269318674Z" level=info msg="Started container" PID=9041 containerID=b1a6b256b8c98af3e77b04a05b2a65c8cd4c474c6710af81414409c5d6ea1a6d description=kube-system/registry-creds-764b6fb674-5k497/registry-creds id=ef8cc980-ef45-4fe2-a01e-1500e331c155 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aad9f513b52dc37f021810e077c8a104964e082cfc60c8ffa508836677893e28
	Oct 10 17:35:31 addons-594989 crio[799]: time="2025-10-10T17:35:31.330775053Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-s54n2/POD" id=08e9cf35-43f7-4d77-95bc-06c549108044 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 17:35:31 addons-594989 crio[799]: time="2025-10-10T17:35:31.330864358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 17:35:31 addons-594989 crio[799]: time="2025-10-10T17:35:31.337170818Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-s54n2 Namespace:default ID:ff914dbfdcdaa7e04cd9bd6523494682c2e0c6ee5f3e988e084e3dc3ccb5e52a UID:37875a9a-91c7-496b-92ef-286b3ae18f56 NetNS:/var/run/netns/dd7dfad8-1fdd-4731-90eb-96b9b5ff3e53 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00012c380}] Aliases:map[]}"
	Oct 10 17:35:31 addons-594989 crio[799]: time="2025-10-10T17:35:31.337208291Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-s54n2 to CNI network \"kindnet\" (type=ptp)"
	Oct 10 17:35:31 addons-594989 crio[799]: time="2025-10-10T17:35:31.346858329Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-s54n2 Namespace:default ID:ff914dbfdcdaa7e04cd9bd6523494682c2e0c6ee5f3e988e084e3dc3ccb5e52a UID:37875a9a-91c7-496b-92ef-286b3ae18f56 NetNS:/var/run/netns/dd7dfad8-1fdd-4731-90eb-96b9b5ff3e53 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00012c380}] Aliases:map[]}"
	Oct 10 17:35:31 addons-594989 crio[799]: time="2025-10-10T17:35:31.34701121Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-s54n2 for CNI network kindnet (type=ptp)"
	Oct 10 17:35:31 addons-594989 crio[799]: time="2025-10-10T17:35:31.347858953Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 10 17:35:31 addons-594989 crio[799]: time="2025-10-10T17:35:31.348695854Z" level=info msg="Ran pod sandbox ff914dbfdcdaa7e04cd9bd6523494682c2e0c6ee5f3e988e084e3dc3ccb5e52a with infra container: default/hello-world-app-5d498dc89-s54n2/POD" id=08e9cf35-43f7-4d77-95bc-06c549108044 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 17:35:31 addons-594989 crio[799]: time="2025-10-10T17:35:31.349897122Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9e1cd525-910b-47c0-b38f-80ed690dae46 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 17:35:31 addons-594989 crio[799]: time="2025-10-10T17:35:31.350039602Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=9e1cd525-910b-47c0-b38f-80ed690dae46 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 17:35:31 addons-594989 crio[799]: time="2025-10-10T17:35:31.350106634Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=9e1cd525-910b-47c0-b38f-80ed690dae46 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 17:35:31 addons-594989 crio[799]: time="2025-10-10T17:35:31.350712167Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=7d02de1a-2fdd-4068-a5e2-1054761c85a1 name=/runtime.v1.ImageService/PullImage
	Oct 10 17:35:31 addons-594989 crio[799]: time="2025-10-10T17:35:31.362943832Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	b1a6b256b8c98       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   aad9f513b52dc       registry-creds-764b6fb674-5k497            kube-system
	b878e485b0b6d       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago        Running             nginx                                    0                   1de2f80131e52       nginx                                      default
	1ed879b0d53cd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   03f62b61efde3       busybox                                    default
	e8d77cbefbfdb       registry.k8s.io/ingress-nginx/controller@sha256:cfcddeb96818021113c47ca3db866d083e80550444ed5f24fdc76f66911db270                             2 minutes ago        Running             controller                               0                   91522c7108af4       ingress-nginx-controller-9cc49f96f-szmc7   ingress-nginx
	5e95cdad96822       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   0de4c981d5c38       csi-hostpathplugin-4g74f                   kube-system
	d699fc1ff60de       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago        Running             csi-provisioner                          0                   0de4c981d5c38       csi-hostpathplugin-4g74f                   kube-system
	f9378118d907d       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago        Running             liveness-probe                           0                   0de4c981d5c38       csi-hostpathplugin-4g74f                   kube-system
	678a2f9830be7       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago        Running             hostpath                                 0                   0de4c981d5c38       csi-hostpathplugin-4g74f                   kube-system
	ad42a3a9aced0       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   0de4c981d5c38       csi-hostpathplugin-4g74f                   kube-system
	11a226756b852       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago        Running             gcp-auth                                 0                   eed2e5a4248c0       gcp-auth-78565c9fb4-nq7rp                  gcp-auth
	1adac014fd3a3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago        Running             gadget                                   0                   abbd9f78ebfd3       gadget-cntr6                               gadget
	b55d72508fae2       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   6ec8e8ba0baa5       registry-proxy-8mr65                       kube-system
	901121a197604       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   22a51adb6b88a       amd-gpu-device-plugin-b5h8w                kube-system
	78ce271903e84       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                                             3 minutes ago        Exited              patch                                    2                   d7eb0e5c33546       ingress-nginx-admission-patch-vvdlx        ingress-nginx
	071e94df1917c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   0de4c981d5c38       csi-hostpathplugin-4g74f                   kube-system
	0a23ff7a6d094       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:316cd3217236293ba00ab9b5eac4056b15d9ab870b3eeeeb99e0d9139a608aa3                   3 minutes ago        Exited              create                                   0                   988021fafc5da       ingress-nginx-admission-create-4djcf       ingress-nginx
	c9f157c863480       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   4604e2fdc7888       csi-hostpath-attacher-0                    kube-system
	6031890f647ec       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   884802be82161       csi-hostpath-resizer-0                     kube-system
	481780e193071       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   3d564bd36f7bb       local-path-provisioner-648f6765c9-qr9vc    local-path-storage
	7325b7e01b366       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   e6bf113f7eb81       nvidia-device-plugin-daemonset-dlkfx       kube-system
	19825e2ee8b34       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   16b4cced62352       snapshot-controller-7d9fbc56b8-jt7fl       kube-system
	4ab90120209a5       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   b428920959f95       yakd-dashboard-5ff678cb9-xsjmw             yakd-dashboard
	22fc52febdf0c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   13a0f1a77192a       snapshot-controller-7d9fbc56b8-ktmdr       kube-system
	cf1d9072746c6       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago        Running             cloud-spanner-emulator                   0                   ec4a68a00cb3d       cloud-spanner-emulator-86bd5cbb97-55bl8    default
	b770cbeea4ac5       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   f529ade8c25da       metrics-server-85b7d694d7-wccx5            kube-system
	a6f2b6c587bcc       docker.io/library/registry@sha256:42be4a75b921489e51574e12889b71484a6524a02c4008c52c6f26ce30c7b990                                           3 minutes ago        Running             registry                                 0                   dd712ee62a05a       registry-66898fdd98-6gl8m                  kube-system
	0e80700c17777       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   c387030f9c3e9       kube-ingress-dns-minikube                  kube-system
	8cab3f92e9e88       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   2fbdd5e041e28       coredns-66bc5c9577-lpc4f                   kube-system
	8cb0bc1946c2e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   c8086bebda554       storage-provisioner                        kube-system
	a664c4cd86a07       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   034b9c6ba72c6       kindnet-rbr7w                              kube-system
	4f4668380d008       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   d1bcbc8fa6936       kube-proxy-2st6b                           kube-system
	c1f6da858e936       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago        Running             kube-apiserver                           0                   0e883274ec80a       kube-apiserver-addons-594989               kube-system
	03911015ab5c0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago        Running             kube-controller-manager                  0                   4b5365310a2c7       kube-controller-manager-addons-594989      kube-system
	8643869dd690c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago        Running             etcd                                     0                   ab90261f3deb4       etcd-addons-594989                         kube-system
	426cb7351d8b7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago        Running             kube-scheduler                           0                   71615bf9c7690       kube-scheduler-addons-594989               kube-system
	
	
	==> coredns [8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c] <==
	[INFO] 10.244.0.21:58354 - 11892 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.00773583s
	[INFO] 10.244.0.21:59984 - 55269 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004766072s
	[INFO] 10.244.0.21:60345 - 21079 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007612186s
	[INFO] 10.244.0.21:37302 - 42381 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004734411s
	[INFO] 10.244.0.21:48806 - 44788 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004884932s
	[INFO] 10.244.0.21:51349 - 64954 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000869586s
	[INFO] 10.244.0.21:59551 - 47721 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001670272s
	[INFO] 10.244.0.26:38212 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000238756s
	[INFO] 10.244.0.26:44611 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000159106s
	[INFO] 10.244.0.31:42732 - 43257 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000193334s
	[INFO] 10.244.0.31:44267 - 22087 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000307552s
	[INFO] 10.244.0.31:49215 - 27461 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000098685s
	[INFO] 10.244.0.31:48388 - 63876 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000113723s
	[INFO] 10.244.0.31:36776 - 6616 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.00009496s
	[INFO] 10.244.0.31:33272 - 39322 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000135436s
	[INFO] 10.244.0.31:39767 - 28171 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.002789225s
	[INFO] 10.244.0.31:60079 - 58257 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003163147s
	[INFO] 10.244.0.31:48001 - 59890 "AAAA IN accounts.google.com.europe-west4-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.004698653s
	[INFO] 10.244.0.31:33404 - 29481 "A IN accounts.google.com.europe-west4-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.005185879s
	[INFO] 10.244.0.31:58384 - 11396 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.003804311s
	[INFO] 10.244.0.31:47906 - 50049 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005364018s
	[INFO] 10.244.0.31:59124 - 39893 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004038023s
	[INFO] 10.244.0.31:34046 - 41932 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004354838s
	[INFO] 10.244.0.31:49477 - 26659 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001510114s
	[INFO] 10.244.0.31:48160 - 30507 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001543157s
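The lookups above show the resolver walking every cluster search domain (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE-provided domains) before the bare accounts.google.com query succeeds. That fan-out is normal for the default ndots:5 pod DNS policy; a pod that mostly resolves external names can reduce it via dnsConfig. A sketch only, and nothing in this report configures it:

    # Sketch: lower ndots so external names such as accounts.google.com
    # are tried as-is before any search-domain expansion.
    apiVersion: v1
    kind: Pod
    metadata:
      name: low-ndots-demo
    spec:
      containers:
        - name: app
          image: registry.k8s.io/pause:3.9
      dnsConfig:
        options:
          - name: ndots
            value: "1"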
	
	
	==> describe nodes <==
	Name:               addons-594989
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-594989
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=addons-594989
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T17_30_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-594989
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-594989"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 17:30:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-594989
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 17:35:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 17:34:22 +0000   Fri, 10 Oct 2025 17:30:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 17:34:22 +0000   Fri, 10 Oct 2025 17:30:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 17:34:22 +0000   Fri, 10 Oct 2025 17:30:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 17:34:22 +0000   Fri, 10 Oct 2025 17:31:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-594989
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                bc1320a3-f798-4c40-8baa-c6409dc2b259
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     cloud-spanner-emulator-86bd5cbb97-55bl8     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  default                     hello-world-app-5d498dc89-s54n2             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-cntr6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  gcp-auth                    gcp-auth-78565c9fb4-nq7rp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-szmc7    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m39s
	  kube-system                 amd-gpu-device-plugin-b5h8w                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 coredns-66bc5c9577-lpc4f                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m40s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 csi-hostpathplugin-4g74f                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 etcd-addons-594989                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m46s
	  kube-system                 kindnet-rbr7w                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m41s
	  kube-system                 kube-apiserver-addons-594989                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-controller-manager-addons-594989       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 kube-proxy-2st6b                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-scheduler-addons-594989                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 metrics-server-85b7d694d7-wccx5             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m39s
	  kube-system                 nvidia-device-plugin-daemonset-dlkfx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 registry-66898fdd98-6gl8m                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 registry-creds-764b6fb674-5k497             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 registry-proxy-8mr65                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 snapshot-controller-7d9fbc56b8-jt7fl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 snapshot-controller-7d9fbc56b8-ktmdr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  local-path-storage          local-path-provisioner-648f6765c9-qr9vc     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-xsjmw              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m39s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m51s (x8 over 4m52s)  kubelet          Node addons-594989 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m51s (x8 over 4m52s)  kubelet          Node addons-594989 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m51s (x8 over 4m52s)  kubelet          Node addons-594989 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m46s                  kubelet          Node addons-594989 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s                  kubelet          Node addons-594989 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s                  kubelet          Node addons-594989 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m42s                  node-controller  Node addons-594989 event: Registered Node addons-594989 in Controller
	  Normal  NodeReady                3m59s                  kubelet          Node addons-594989 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.077121] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021628] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.602398] kauditd_printk_skb: 47 callbacks suppressed
	[Oct10 17:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.057549] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.023904] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.023945] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.024888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.022912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +2.047862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +4.031726] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +8.191358] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[ +16.382802] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[Oct10 17:34] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	
	
	==> etcd [8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07] <==
	{"level":"warn","ts":"2025-10-10T17:30:43.473662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.479066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.484576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.490240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.501834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.507447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.514355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.520386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.526042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.531699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.548018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.553808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.560179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.608486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:54.944092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:54.950362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:31:20.982396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:31:20.989357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:31:21.004912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:31:21.012218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55068","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-10T17:31:57.848435Z","caller":"traceutil/trace.go:172","msg":"trace[880933500] transaction","detail":"{read_only:false; response_revision:1066; number_of_response:1; }","duration":"109.517609ms","start":"2025-10-10T17:31:57.738902Z","end":"2025-10-10T17:31:57.848420Z","steps":["trace[880933500] 'process raft request'  (duration: 109.407338ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T17:32:40.992608Z","caller":"traceutil/trace.go:172","msg":"trace[1190951252] transaction","detail":"{read_only:false; response_revision:1272; number_of_response:1; }","duration":"121.840409ms","start":"2025-10-10T17:32:40.870748Z","end":"2025-10-10T17:32:40.992589Z","steps":["trace[1190951252] 'process raft request'  (duration: 121.726607ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T17:33:07.347939Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.115119ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-10T17:33:07.348040Z","caller":"traceutil/trace.go:172","msg":"trace[1457177052] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1408; }","duration":"121.241885ms","start":"2025-10-10T17:33:07.226781Z","end":"2025-10-10T17:33:07.348023Z","steps":["trace[1457177052] 'agreement among raft nodes before linearized reading'  (duration: 54.360237ms)","trace[1457177052] 'range keys from in-memory index tree'  (duration: 66.721654ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-10T17:33:07.348215Z","caller":"traceutil/trace.go:172","msg":"trace[1464348081] transaction","detail":"{read_only:false; response_revision:1409; number_of_response:1; }","duration":"124.742528ms","start":"2025-10-10T17:33:07.223438Z","end":"2025-10-10T17:33:07.348181Z","steps":["trace[1464348081] 'process raft request'  (duration: 57.741751ms)","trace[1464348081] 'compare'  (duration: 66.752935ms)"],"step_count":2}
	
	
	==> gcp-auth [11a226756b852a088f5077fda3513adc69259071ed3ae91bb0c3d326dd67d983] <==
	2025/10/10 17:32:28 GCP Auth Webhook started!
	2025/10/10 17:32:43 Ready to marshal response ...
	2025/10/10 17:32:43 Ready to write response ...
	2025/10/10 17:32:44 Ready to marshal response ...
	2025/10/10 17:32:44 Ready to write response ...
	2025/10/10 17:32:44 Ready to marshal response ...
	2025/10/10 17:32:44 Ready to write response ...
	2025/10/10 17:32:58 Ready to marshal response ...
	2025/10/10 17:32:58 Ready to write response ...
	2025/10/10 17:33:01 Ready to marshal response ...
	2025/10/10 17:33:01 Ready to write response ...
	2025/10/10 17:33:01 Ready to marshal response ...
	2025/10/10 17:33:01 Ready to write response ...
	2025/10/10 17:33:04 Ready to marshal response ...
	2025/10/10 17:33:04 Ready to write response ...
	2025/10/10 17:33:05 Ready to marshal response ...
	2025/10/10 17:33:05 Ready to write response ...
	2025/10/10 17:33:13 Ready to marshal response ...
	2025/10/10 17:33:13 Ready to write response ...
	2025/10/10 17:33:21 Ready to marshal response ...
	2025/10/10 17:33:21 Ready to write response ...
	2025/10/10 17:35:31 Ready to marshal response ...
	2025/10/10 17:35:31 Ready to write response ...
	
	
	==> kernel <==
	 17:35:32 up 17 min,  0 user,  load average: 0.41, 0.63, 0.34
	Linux addons-594989 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505] <==
	I1010 17:33:23.286833       1 main.go:301] handling current node
	I1010 17:33:33.286307       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:33:33.286365       1 main.go:301] handling current node
	I1010 17:33:43.286680       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:33:43.286710       1 main.go:301] handling current node
	I1010 17:33:53.286133       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:33:53.286171       1 main.go:301] handling current node
	I1010 17:34:03.286118       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:34:03.286151       1 main.go:301] handling current node
	I1010 17:34:13.286196       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:34:13.286228       1 main.go:301] handling current node
	I1010 17:34:23.286563       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:34:23.286590       1 main.go:301] handling current node
	I1010 17:34:33.290534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:34:33.290570       1 main.go:301] handling current node
	I1010 17:34:43.286605       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:34:43.286649       1 main.go:301] handling current node
	I1010 17:34:53.286469       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:34:53.286497       1 main.go:301] handling current node
	I1010 17:35:03.290766       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:35:03.290799       1 main.go:301] handling current node
	I1010 17:35:13.286113       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:35:13.286147       1 main.go:301] handling current node
	I1010 17:35:23.293187       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:35:23.293224       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d] <==
	W1010 17:31:21.012204       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1010 17:31:33.854408       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.194.163:443: connect: connection refused
	E1010 17:31:33.854457       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.194.163:443: connect: connection refused" logger="UnhandledError"
	W1010 17:31:33.854478       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.194.163:443: connect: connection refused
	E1010 17:31:33.854513       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.194.163:443: connect: connection refused" logger="UnhandledError"
	W1010 17:31:33.873138       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.194.163:443: connect: connection refused
	E1010 17:31:33.873174       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.194.163:443: connect: connection refused" logger="UnhandledError"
	W1010 17:31:33.873146       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.194.163:443: connect: connection refused
	E1010 17:31:33.873279       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.194.163:443: connect: connection refused" logger="UnhandledError"
	W1010 17:31:44.407623       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 17:31:44.407699       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1010 17:31:44.407751       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.205.41:443: connect: connection refused" logger="UnhandledError"
	E1010 17:31:44.409180       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.205.41:443: connect: connection refused" logger="UnhandledError"
	E1010 17:31:44.414920       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.205.41:443: connect: connection refused" logger="UnhandledError"
	E1010 17:31:44.435650       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.205.41:443: connect: connection refused" logger="UnhandledError"
	I1010 17:31:44.502672       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1010 17:32:54.199837       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54012: use of closed network connection
	E1010 17:32:54.347926       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54034: use of closed network connection
	I1010 17:33:05.335961       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1010 17:33:05.510137       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.169.27"}
	I1010 17:33:08.363204       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1010 17:35:31.095085       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.104.33"}
	
	
	==> kube-controller-manager [03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04] <==
	I1010 17:30:50.965688       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1010 17:30:50.965695       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1010 17:30:50.965926       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1010 17:30:50.965968       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1010 17:30:50.965994       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1010 17:30:50.966191       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1010 17:30:50.966226       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1010 17:30:50.966294       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1010 17:30:50.966567       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1010 17:30:50.968165       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1010 17:30:50.968280       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1010 17:30:50.968366       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1010 17:30:50.969331       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1010 17:30:50.973225       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 17:30:50.975460       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1010 17:30:50.979704       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1010 17:30:50.984937       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1010 17:31:20.976991       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 17:31:20.977170       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1010 17:31:20.977206       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1010 17:31:20.995575       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1010 17:31:20.999441       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1010 17:31:21.078240       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 17:31:21.099832       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 17:31:35.920817       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336] <==
	I1010 17:30:52.842520       1 server_linux.go:53] "Using iptables proxy"
	I1010 17:30:53.242118       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 17:30:53.343278       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 17:30:53.343387       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1010 17:30:53.343511       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 17:30:53.517826       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 17:30:53.517976       1 server_linux.go:132] "Using iptables Proxier"
	I1010 17:30:53.528718       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 17:30:53.537312       1 server.go:527] "Version info" version="v1.34.1"
	I1010 17:30:53.537552       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 17:30:53.539916       1 config.go:200] "Starting service config controller"
	I1010 17:30:53.540002       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 17:30:53.540064       1 config.go:106] "Starting endpoint slice config controller"
	I1010 17:30:53.541365       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 17:30:53.540527       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 17:30:53.541450       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 17:30:53.541071       1 config.go:309] "Starting node config controller"
	I1010 17:30:53.541499       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 17:30:53.541524       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 17:30:53.640370       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1010 17:30:53.641583       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 17:30:53.641658       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403] <==
	E1010 17:30:43.997541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1010 17:30:43.997570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1010 17:30:43.997609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1010 17:30:43.997627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1010 17:30:43.997672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1010 17:30:43.997676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1010 17:30:43.996546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1010 17:30:43.997682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1010 17:30:43.997698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1010 17:30:43.997794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1010 17:30:43.997847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1010 17:30:43.997852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1010 17:30:44.885991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1010 17:30:45.044912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1010 17:30:45.069471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1010 17:30:45.088704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1010 17:30:45.091700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1010 17:30:45.107907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1010 17:30:45.109733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1010 17:30:45.122661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1010 17:30:45.172771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1010 17:30:45.186692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1010 17:30:45.191574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1010 17:30:45.199796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1010 17:30:48.194410       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 10 17:33:29 addons-594989 kubelet[1334]: I1010 17:33:29.476074    1334 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2792174a-3c8a-4d5a-b0b0-614559fd4064-gcp-creds\") on node \"addons-594989\" DevicePath \"\""
	Oct 10 17:33:29 addons-594989 kubelet[1334]: I1010 17:33:29.477888    1334 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2792174a-3c8a-4d5a-b0b0-614559fd4064-kube-api-access-fjqkm" (OuterVolumeSpecName: "kube-api-access-fjqkm") pod "2792174a-3c8a-4d5a-b0b0-614559fd4064" (UID: "2792174a-3c8a-4d5a-b0b0-614559fd4064"). InnerVolumeSpecName "kube-api-access-fjqkm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 10 17:33:29 addons-594989 kubelet[1334]: I1010 17:33:29.478898    1334 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^3732d871-a5ff-11f0-9a83-7a8f2f6fbfaf" (OuterVolumeSpecName: "task-pv-storage") pod "2792174a-3c8a-4d5a-b0b0-614559fd4064" (UID: "2792174a-3c8a-4d5a-b0b0-614559fd4064"). InnerVolumeSpecName "pvc-0630308e-e180-4f79-8a71-af698ee4e838". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 10 17:33:29 addons-594989 kubelet[1334]: I1010 17:33:29.577306    1334 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fjqkm\" (UniqueName: \"kubernetes.io/projected/2792174a-3c8a-4d5a-b0b0-614559fd4064-kube-api-access-fjqkm\") on node \"addons-594989\" DevicePath \"\""
	Oct 10 17:33:29 addons-594989 kubelet[1334]: I1010 17:33:29.577356    1334 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-0630308e-e180-4f79-8a71-af698ee4e838\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^3732d871-a5ff-11f0-9a83-7a8f2f6fbfaf\") on node \"addons-594989\" "
	Oct 10 17:33:29 addons-594989 kubelet[1334]: I1010 17:33:29.581524    1334 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-0630308e-e180-4f79-8a71-af698ee4e838" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^3732d871-a5ff-11f0-9a83-7a8f2f6fbfaf") on node "addons-594989"
	Oct 10 17:33:29 addons-594989 kubelet[1334]: I1010 17:33:29.678223    1334 reconciler_common.go:299] "Volume detached for volume \"pvc-0630308e-e180-4f79-8a71-af698ee4e838\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^3732d871-a5ff-11f0-9a83-7a8f2f6fbfaf\") on node \"addons-594989\" DevicePath \"\""
	Oct 10 17:33:29 addons-594989 kubelet[1334]: I1010 17:33:29.845377    1334 scope.go:117] "RemoveContainer" containerID="329647ebcb26a3bac2fc82b540d84a1c94fd7bd7aaef14b71346f6693cc5749a"
	Oct 10 17:33:29 addons-594989 kubelet[1334]: I1010 17:33:29.858525    1334 scope.go:117] "RemoveContainer" containerID="329647ebcb26a3bac2fc82b540d84a1c94fd7bd7aaef14b71346f6693cc5749a"
	Oct 10 17:33:29 addons-594989 kubelet[1334]: E1010 17:33:29.859036    1334 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"329647ebcb26a3bac2fc82b540d84a1c94fd7bd7aaef14b71346f6693cc5749a\": container with ID starting with 329647ebcb26a3bac2fc82b540d84a1c94fd7bd7aaef14b71346f6693cc5749a not found: ID does not exist" containerID="329647ebcb26a3bac2fc82b540d84a1c94fd7bd7aaef14b71346f6693cc5749a"
	Oct 10 17:33:29 addons-594989 kubelet[1334]: I1010 17:33:29.859650    1334 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"329647ebcb26a3bac2fc82b540d84a1c94fd7bd7aaef14b71346f6693cc5749a"} err="failed to get container status \"329647ebcb26a3bac2fc82b540d84a1c94fd7bd7aaef14b71346f6693cc5749a\": rpc error: code = NotFound desc = could not find container \"329647ebcb26a3bac2fc82b540d84a1c94fd7bd7aaef14b71346f6693cc5749a\": container with ID starting with 329647ebcb26a3bac2fc82b540d84a1c94fd7bd7aaef14b71346f6693cc5749a not found: ID does not exist"
	Oct 10 17:33:30 addons-594989 kubelet[1334]: I1010 17:33:30.221920    1334 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-b5h8w" secret="" err="secret \"gcp-auth\" not found"
	Oct 10 17:33:30 addons-594989 kubelet[1334]: I1010 17:33:30.224726    1334 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2792174a-3c8a-4d5a-b0b0-614559fd4064" path="/var/lib/kubelet/pods/2792174a-3c8a-4d5a-b0b0-614559fd4064/volumes"
	Oct 10 17:33:36 addons-594989 kubelet[1334]: E1010 17:33:36.874036    1334 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-5k497" podUID="b1404742-2d86-4ac9-91f6-3d70ff795aa1"
	Oct 10 17:33:46 addons-594989 kubelet[1334]: I1010 17:33:46.229253    1334 scope.go:117] "RemoveContainer" containerID="fc03eed646aae5bb4dfd1b435c6a1c0af1e2c020db2279942bb9791f73279840"
	Oct 10 17:33:46 addons-594989 kubelet[1334]: I1010 17:33:46.238504    1334 scope.go:117] "RemoveContainer" containerID="8ba39728fd822034a15dc330af9ccb356c59b378838d7ab93f6eed35ae387e02"
	Oct 10 17:33:46 addons-594989 kubelet[1334]: I1010 17:33:46.245349    1334 scope.go:117] "RemoveContainer" containerID="4e19497fb6c56c6eeba442234af9fc09bc38f96d4c3fe616d6da657b53a1a434"
	Oct 10 17:33:46 addons-594989 kubelet[1334]: I1010 17:33:46.251808    1334 scope.go:117] "RemoveContainer" containerID="8e7d3edfb744ae9a16543c173f51b6c85808a3131074a4da38a4f6becb0f84b3"
	Oct 10 17:33:46 addons-594989 kubelet[1334]: I1010 17:33:46.258671    1334 scope.go:117] "RemoveContainer" containerID="87b706c50651c138281a676ec47b8ed40d32734c59bbbc2a790918f25182fcca"
	Oct 10 17:33:50 addons-594989 kubelet[1334]: I1010 17:33:50.935590    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-5k497" podStartSLOduration=175.98264844 podStartE2EDuration="2m57.935572441s" podCreationTimestamp="2025-10-10 17:30:53 +0000 UTC" firstStartedPulling="2025-10-10 17:33:48.241733271 +0000 UTC m=+182.100529459" lastFinishedPulling="2025-10-10 17:33:50.194657264 +0000 UTC m=+184.053453460" observedRunningTime="2025-10-10 17:33:50.93478364 +0000 UTC m=+184.793579855" watchObservedRunningTime="2025-10-10 17:33:50.935572441 +0000 UTC m=+184.794368646"
	Oct 10 17:34:30 addons-594989 kubelet[1334]: I1010 17:34:30.222298    1334 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-dlkfx" secret="" err="secret \"gcp-auth\" not found"
	Oct 10 17:34:43 addons-594989 kubelet[1334]: I1010 17:34:43.221668    1334 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-8mr65" secret="" err="secret \"gcp-auth\" not found"
	Oct 10 17:34:52 addons-594989 kubelet[1334]: I1010 17:34:52.222259    1334 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-b5h8w" secret="" err="secret \"gcp-auth\" not found"
	Oct 10 17:35:31 addons-594989 kubelet[1334]: I1010 17:35:31.136919    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/37875a9a-91c7-496b-92ef-286b3ae18f56-gcp-creds\") pod \"hello-world-app-5d498dc89-s54n2\" (UID: \"37875a9a-91c7-496b-92ef-286b3ae18f56\") " pod="default/hello-world-app-5d498dc89-s54n2"
	Oct 10 17:35:31 addons-594989 kubelet[1334]: I1010 17:35:31.137017    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzjz\" (UniqueName: \"kubernetes.io/projected/37875a9a-91c7-496b-92ef-286b3ae18f56-kube-api-access-vxzjz\") pod \"hello-world-app-5d498dc89-s54n2\" (UID: \"37875a9a-91c7-496b-92ef-286b3ae18f56\") " pod="default/hello-world-app-5d498dc89-s54n2"
	
	
	==> storage-provisioner [8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa] <==
	W1010 17:35:07.250499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:09.252962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:09.256617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:11.259151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:11.262657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:13.265214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:13.268662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:15.271475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:15.274880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:17.277399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:17.280966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:19.283200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:19.286505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:21.289030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:21.294552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:23.297156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:23.301886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:25.305143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:25.308973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:27.312335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:27.315867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:29.318182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:29.322694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:31.325650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:35:31.330161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
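One note on the storage-provisioner block above: those warnings repeat every couple of seconds because the provisioner still reads and writes a v1 Endpoints object, most plausibly as its leader-election lock. The replacement the warning points toward is a coordination.k8s.io/v1 Lease; below is a minimal client-go sketch, assuming an illustrative lock name and identity rather than the provisioner's real configuration:

	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// A Lease-based lock avoids the deprecated v1 Endpoints API entirely.
		// Name and identity here are illustrative, not the provisioner's own.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "storage-provisioner", Namespace: "kube-system"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "provisioner-1"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
				OnStoppedLeading: func() { log.Println("lost leadership") },
			},
		})
	}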
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-594989 -n addons-594989
helpers_test.go:269: (dbg) Run:  kubectl --context addons-594989 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-4djcf ingress-nginx-admission-patch-vvdlx
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-594989 describe pod ingress-nginx-admission-create-4djcf ingress-nginx-admission-patch-vvdlx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-594989 describe pod ingress-nginx-admission-create-4djcf ingress-nginx-admission-patch-vvdlx: exit status 1 (54.413999ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4djcf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vvdlx" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-594989 describe pod ingress-nginx-admission-create-4djcf ingress-nginx-admission-patch-vvdlx: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (231.638529ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1010 17:35:33.539530   25604 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:35:33.539869   25604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:35:33.539882   25604 out.go:374] Setting ErrFile to fd 2...
	I1010 17:35:33.539888   25604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:35:33.540201   25604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:35:33.540455   25604 mustload.go:65] Loading cluster: addons-594989
	I1010 17:35:33.540767   25604 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:35:33.540781   25604 addons.go:606] checking whether the cluster is paused
	I1010 17:35:33.540853   25604 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:35:33.540864   25604 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:35:33.541229   25604 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:35:33.559923   25604 ssh_runner.go:195] Run: systemctl --version
	I1010 17:35:33.559975   25604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:35:33.580094   25604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:35:33.676767   25604 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:35:33.676861   25604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:35:33.707463   25604 cri.go:89] found id: "b1a6b256b8c98af3e77b04a05b2a65c8cd4c474c6710af81414409c5d6ea1a6d"
	I1010 17:35:33.707490   25604 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:35:33.707495   25604 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:35:33.707497   25604 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:35:33.707500   25604 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:35:33.707503   25604 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:35:33.707505   25604 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:35:33.707508   25604 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:35:33.707510   25604 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:35:33.707515   25604 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:35:33.707518   25604 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:35:33.707520   25604 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:35:33.707522   25604 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:35:33.707525   25604 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:35:33.707527   25604 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:35:33.707545   25604 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:35:33.707555   25604 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:35:33.707562   25604 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:35:33.707566   25604 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:35:33.707570   25604 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:35:33.707574   25604 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:35:33.707578   25604 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:35:33.707582   25604 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:35:33.707586   25604 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:35:33.707590   25604 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:35:33.707594   25604 cri.go:89] found id: ""
	I1010 17:35:33.707637   25604 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:35:33.722037   25604 out.go:203] 
	W1010 17:35:33.722956   25604 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:35:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:35:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:35:33.722975   25604 out.go:285] * 
	* 
	W1010 17:35:33.726463   25604 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:35:33.727440   25604 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-594989 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 addons disable ingress --alsologtostderr -v=1: exit status 11 (224.596429ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1010 17:35:33.770732   25679 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:35:33.770999   25679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:35:33.771008   25679 out.go:374] Setting ErrFile to fd 2...
	I1010 17:35:33.771013   25679 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:35:33.771194   25679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:35:33.771435   25679 mustload.go:65] Loading cluster: addons-594989
	I1010 17:35:33.771732   25679 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:35:33.771746   25679 addons.go:606] checking whether the cluster is paused
	I1010 17:35:33.771822   25679 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:35:33.771834   25679 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:35:33.772208   25679 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:35:33.789765   25679 ssh_runner.go:195] Run: systemctl --version
	I1010 17:35:33.789818   25679 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:35:33.808513   25679 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:35:33.904449   25679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:35:33.904510   25679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:35:33.932871   25679 cri.go:89] found id: "b1a6b256b8c98af3e77b04a05b2a65c8cd4c474c6710af81414409c5d6ea1a6d"
	I1010 17:35:33.932898   25679 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:35:33.932902   25679 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:35:33.932905   25679 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:35:33.932908   25679 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:35:33.932915   25679 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:35:33.932917   25679 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:35:33.932920   25679 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:35:33.932922   25679 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:35:33.932931   25679 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:35:33.932933   25679 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:35:33.932936   25679 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:35:33.932938   25679 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:35:33.932941   25679 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:35:33.932944   25679 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:35:33.932952   25679 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:35:33.932956   25679 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:35:33.932961   25679 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:35:33.932963   25679 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:35:33.932970   25679 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:35:33.932972   25679 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:35:33.932975   25679 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:35:33.932977   25679 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:35:33.932979   25679 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:35:33.932982   25679 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:35:33.932984   25679 cri.go:89] found id: ""
	I1010 17:35:33.933029   25679 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:35:33.946836   25679 out.go:203] 
	W1010 17:35:33.947855   25679 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:35:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:35:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:35:33.947875   25679 out.go:285] * 
	* 
	W1010 17:35:33.951164   25679 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:35:33.952169   25679 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-594989 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.86s)
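Note that every disable attempt in this group dies in the same place: the paused-state check runs `sudo runc list -f json` and gets "open /run/runc: no such file or directory". A plausible reading is that CRI-O on this node is configured with a runtime other than runc (crun keeps its state under /run/crun), so the hard-coded runc state root never exists. The following is a minimal sketch of a more tolerant probe under that assumption; it is not minikube's actual implementation:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// listContainers probes known OCI runtime state roots instead of
	// hard-coding runc, then runs that runtime's `list` subcommand.
	func listContainers() ([]byte, error) {
		candidates := []struct{ bin, root string }{
			{"runc", "/run/runc"}, // what the failing check assumes
			{"crun", "/run/crun"}, // assumed default state root for crun
		}
		for _, c := range candidates {
			if _, err := os.Stat(c.root); err == nil {
				return exec.Command("sudo", c.bin, "--root", c.root, "list").CombinedOutput()
			}
		}
		return nil, fmt.Errorf("no known OCI runtime state directory present")
	}

	func main() {
		out, err := listContainers()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Print(string(out))
	}

Checking which of /run/runc and /run/crun actually exists on the node would confirm or refute this reading.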

TestAddons/parallel/InspektorGadget (6.23s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-cntr6" [8273ba33-d719-4c51-a66f-facfd063ac10] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003254329s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (225.483137ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1010 17:33:15.231688   22503 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:33:15.232023   22503 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:15.232035   22503 out.go:374] Setting ErrFile to fd 2...
	I1010 17:33:15.232039   22503 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:15.232273   22503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:33:15.232580   22503 mustload.go:65] Loading cluster: addons-594989
	I1010 17:33:15.232937   22503 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:15.232957   22503 addons.go:606] checking whether the cluster is paused
	I1010 17:33:15.233046   22503 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:15.233077   22503 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:33:15.233487   22503 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:33:15.251228   22503 ssh_runner.go:195] Run: systemctl --version
	I1010 17:33:15.251283   22503 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:33:15.268932   22503 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:33:15.363871   22503 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:33:15.363954   22503 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:33:15.392604   22503 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:33:15.392624   22503 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:33:15.392630   22503 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:33:15.392635   22503 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:33:15.392640   22503 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:33:15.392645   22503 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:33:15.392650   22503 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:33:15.392654   22503 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:33:15.392658   22503 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:33:15.392673   22503 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:33:15.392681   22503 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:33:15.392684   22503 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:33:15.392686   22503 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:33:15.392689   22503 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:33:15.392693   22503 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:33:15.392703   22503 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:33:15.392710   22503 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:33:15.392719   22503 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:33:15.392723   22503 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:33:15.392731   22503 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:33:15.392735   22503 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:33:15.392743   22503 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:33:15.392747   22503 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:33:15.392751   22503 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:33:15.392758   22503 cri.go:89] found id: ""
	I1010 17:33:15.392800   22503 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:33:15.407172   22503 out.go:203] 
	W1010 17:33:15.408173   22503 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:33:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:33:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:33:15.408204   22503 out.go:285] * 
	* 
	W1010 17:33:15.411629   22503 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:33:15.412704   22503 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-594989 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.23s)
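Note: each addon enable/disable failure in this run shows the same root cause. After listing the kube-system containers via crictl, minikube verifies the cluster is not paused by running `sudo runc list -f json` on the node; runc's default state directory /run/runc does not exist on this crio node, so the check exits with status 1 and the command aborts with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED). The sketch below reproduces the check over `minikube ssh`; only the default /run/runc failure is taken from the log, and the alternate --root path is a hypothetical crio state directory that may differ on the node.

// repro.go — a minimal sketch of the paused check that fails above, assuming
// the addons-594989 node is reachable through `minikube ssh`.
package main

import (
	"fmt"
	"os/exec"
)

// runcList runs `runc list -f json` on the node, optionally with an explicit
// state root, and prints the combined output for comparison.
func runcList(root string) {
	cmd := "sudo runc list -f json"
	if root != "" {
		cmd = "sudo runc --root " + root + " list -f json"
	}
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "addons-594989", "ssh", "--", cmd).CombinedOutput()
	fmt.Printf("root=%q err=%v\n%s", root, err, out)
}

func main() {
	runcList("")               // reproduces: open /run/runc: no such file or directory
	runcList("/run/crio/runc") // hypothetical crio runtime_root; adjust to the node's config
}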

x
+
TestAddons/parallel/MetricsServer (5.29s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.016422ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-wccx5" [5b88af97-0885-40b1-acb2-8d58361a5fd0] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002507554s
addons_test.go:463: (dbg) Run:  kubectl --context addons-594989 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (225.752939ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1010 17:32:59.673903   20393 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:32:59.674186   20393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:32:59.674196   20393 out.go:374] Setting ErrFile to fd 2...
	I1010 17:32:59.674202   20393 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:32:59.674395   20393 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:32:59.674684   20393 mustload.go:65] Loading cluster: addons-594989
	I1010 17:32:59.675013   20393 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:32:59.675030   20393 addons.go:606] checking whether the cluster is paused
	I1010 17:32:59.675140   20393 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:32:59.675156   20393 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:32:59.675520   20393 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:32:59.692900   20393 ssh_runner.go:195] Run: systemctl --version
	I1010 17:32:59.692952   20393 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:32:59.709804   20393 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:32:59.805557   20393 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:32:59.805632   20393 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:32:59.835910   20393 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:32:59.835928   20393 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:32:59.835939   20393 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:32:59.835944   20393 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:32:59.835949   20393 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:32:59.835954   20393 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:32:59.835958   20393 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:32:59.835962   20393 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:32:59.835969   20393 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:32:59.835976   20393 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:32:59.835980   20393 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:32:59.835984   20393 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:32:59.835991   20393 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:32:59.835995   20393 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:32:59.836007   20393 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:32:59.836011   20393 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:32:59.836014   20393 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:32:59.836017   20393 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:32:59.836020   20393 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:32:59.836022   20393 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:32:59.836025   20393 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:32:59.836027   20393 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:32:59.836029   20393 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:32:59.836031   20393 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:32:59.836034   20393 cri.go:89] found id: ""
	I1010 17:32:59.836086   20393 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:32:59.850344   20393 out.go:203] 
	W1010 17:32:59.851478   20393 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:32:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:32:59.851496   20393 out.go:285] * 
	W1010 17:32:59.854528   20393 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:32:59.855627   20393 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-594989 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.29s)
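Note: the wait and the `kubectl top pods -n kube-system` step above did pass; only the disable step hit the paused-check failure described earlier. For reference, `kubectl top pods` is a query against the metrics.k8s.io API served by metrics-server; a client-go sketch of the same query follows, assuming the addons-594989 kubeconfig context is active (the standard k8s.io/metrics and k8s.io/client-go modules are assumed).

// top.go — a sketch of the pod-metrics query behind `kubectl top pods`.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	mc, err := metricsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := mc.MetricsV1beta1().PodMetricses("kube-system").
		List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err) // fails until metrics-server is Running and serving
	}
	for _, pm := range pods.Items {
		for _, c := range pm.Containers {
			fmt.Printf("%s/%s cpu=%s mem=%s\n", pm.Name, c.Name, c.Usage.Cpu(), c.Usage.Memory())
		}
	}
}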

x
+
TestAddons/parallel/CSI (33.62s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1010 17:32:57.022317    9354 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1010 17:32:57.025847    9354 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1010 17:32:57.025884    9354 kapi.go:107] duration metric: took 3.581996ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.595719ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-594989 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-594989 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [0d8d6454-0ee7-41df-a2ef-b9852ac4adc8] Pending
helpers_test.go:352: "task-pv-pod" [0d8d6454-0ee7-41df-a2ef-b9852ac4adc8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [0d8d6454-0ee7-41df-a2ef-b9852ac4adc8] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003147373s
addons_test.go:572: (dbg) Run:  kubectl --context addons-594989 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-594989 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-594989 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-594989 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-594989 delete pod task-pv-pod: (1.334256733s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-594989 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-594989 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-594989 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [2792174a-3c8a-4d5a-b0b0-614559fd4064] Pending
helpers_test.go:352: "task-pv-pod-restore" [2792174a-3c8a-4d5a-b0b0-614559fd4064] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [2792174a-3c8a-4d5a-b0b0-614559fd4064] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003745065s
addons_test.go:614: (dbg) Run:  kubectl --context addons-594989 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-594989 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-594989 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (226.124086ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1010 17:33:30.225567   23223 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:33:30.225822   23223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:30.225831   23223 out.go:374] Setting ErrFile to fd 2...
	I1010 17:33:30.225834   23223 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:30.226086   23223 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:33:30.226430   23223 mustload.go:65] Loading cluster: addons-594989
	I1010 17:33:30.226869   23223 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:30.226892   23223 addons.go:606] checking whether the cluster is paused
	I1010 17:33:30.227009   23223 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:30.227023   23223 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:33:30.227428   23223 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:33:30.245629   23223 ssh_runner.go:195] Run: systemctl --version
	I1010 17:33:30.245674   23223 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:33:30.262724   23223 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:33:30.358596   23223 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:33:30.358659   23223 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:33:30.387643   23223 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:33:30.387665   23223 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:33:30.387670   23223 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:33:30.387674   23223 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:33:30.387676   23223 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:33:30.387679   23223 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:33:30.387682   23223 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:33:30.387684   23223 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:33:30.387687   23223 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:33:30.387693   23223 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:33:30.387700   23223 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:33:30.387719   23223 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:33:30.387727   23223 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:33:30.387731   23223 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:33:30.387735   23223 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:33:30.387744   23223 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:33:30.387747   23223 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:33:30.387750   23223 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:33:30.387752   23223 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:33:30.387755   23223 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:33:30.387757   23223 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:33:30.387760   23223 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:33:30.387762   23223 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:33:30.387765   23223 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:33:30.387767   23223 cri.go:89] found id: ""
	I1010 17:33:30.387802   23223 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:33:30.402023   23223 out.go:203] 
	W1010 17:33:30.403124   23223 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:33:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:33:30.403151   23223 out.go:285] * 
	W1010 17:33:30.406183   23223 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:33:30.407166   23223 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-594989 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (226.984227ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1010 17:33:30.452119   23285 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:33:30.452501   23285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:30.452513   23285 out.go:374] Setting ErrFile to fd 2...
	I1010 17:33:30.452520   23285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:30.452839   23285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:33:30.453182   23285 mustload.go:65] Loading cluster: addons-594989
	I1010 17:33:30.453539   23285 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:30.453556   23285 addons.go:606] checking whether the cluster is paused
	I1010 17:33:30.453635   23285 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:30.453647   23285 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:33:30.454000   23285 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:33:30.470992   23285 ssh_runner.go:195] Run: systemctl --version
	I1010 17:33:30.471045   23285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:33:30.487778   23285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:33:30.583563   23285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:33:30.583658   23285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:33:30.612594   23285 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:33:30.612613   23285 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:33:30.612616   23285 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:33:30.612620   23285 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:33:30.612623   23285 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:33:30.612626   23285 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:33:30.612629   23285 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:33:30.612631   23285 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:33:30.612633   23285 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:33:30.612638   23285 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:33:30.612640   23285 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:33:30.612643   23285 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:33:30.612645   23285 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:33:30.612647   23285 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:33:30.612650   23285 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:33:30.612659   23285 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:33:30.612664   23285 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:33:30.612668   23285 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:33:30.612671   23285 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:33:30.612673   23285 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:33:30.612675   23285 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:33:30.612678   23285 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:33:30.612680   23285 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:33:30.612683   23285 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:33:30.612685   23285 cri.go:89] found id: ""
	I1010 17:33:30.612757   23285 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:33:30.627861   23285 out.go:203] 
	W1010 17:33:30.628939   23285 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:33:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:33:30.628963   23285 out.go:285] * 
	W1010 17:33:30.633635   23285 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:33:30.634647   23285 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-594989 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (33.62s)
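Note: the pvc/snapshot/restore flow itself passed; the repeated helpers_test.go:402 lines are a poll on `.status.phase` until the restored claim binds, and only the two disable steps failed on the paused check. A client-go sketch of that wait is below, assuming the hpvc-restore claim in the default namespace and the test's 6m0s budget.

// waitpvc.go — a sketch of the PVC phase poll done by helpers_test.go:402.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 6m, mirroring the test's "waiting 6m0s" budget.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pvc, err := cs.CoreV1().PersistentVolumeClaims("default").
				Get(ctx, "hpvc-restore", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return pvc.Status.Phase == corev1.ClaimBound, nil
		})
	fmt.Println("bound:", err == nil)
}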

x
+
TestAddons/parallel/Headlamp (2.45s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-594989 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-594989 --alsologtostderr -v=1: exit status 11 (225.011706ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1010 17:32:54.613882   19492 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:32:54.614201   19492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:32:54.614212   19492 out.go:374] Setting ErrFile to fd 2...
	I1010 17:32:54.614225   19492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:32:54.614424   19492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:32:54.614723   19492 mustload.go:65] Loading cluster: addons-594989
	I1010 17:32:54.615115   19492 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:32:54.615136   19492 addons.go:606] checking whether the cluster is paused
	I1010 17:32:54.615263   19492 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:32:54.615280   19492 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:32:54.615702   19492 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:32:54.632177   19492 ssh_runner.go:195] Run: systemctl --version
	I1010 17:32:54.632245   19492 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:32:54.648229   19492 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:32:54.742644   19492 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:32:54.742723   19492 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:32:54.772768   19492 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:32:54.772786   19492 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:32:54.772790   19492 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:32:54.772793   19492 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:32:54.772796   19492 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:32:54.772801   19492 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:32:54.772803   19492 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:32:54.772806   19492 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:32:54.772808   19492 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:32:54.772813   19492 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:32:54.772816   19492 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:32:54.772818   19492 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:32:54.772821   19492 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:32:54.772824   19492 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:32:54.772826   19492 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:32:54.772833   19492 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:32:54.772836   19492 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:32:54.772840   19492 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:32:54.772842   19492 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:32:54.772845   19492 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:32:54.772848   19492 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:32:54.772850   19492 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:32:54.772852   19492 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:32:54.772855   19492 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:32:54.772857   19492 cri.go:89] found id: ""
	I1010 17:32:54.772893   19492 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:32:54.787197   19492 out.go:203] 
	W1010 17:32:54.788240   19492 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:32:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:32:54.788265   19492 out.go:285] * 
	W1010 17:32:54.791225   19492 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:32:54.792547   19492 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-594989 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-594989
helpers_test.go:243: (dbg) docker inspect addons-594989:

-- stdout --
	[
	    {
	        "Id": "9fa2ef611c9849fd3426794362cf847d0d6f7c23a20ae6d9800988ad87ecb135",
	        "Created": "2025-10-10T17:30:31.036645528Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11495,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T17:30:31.068002446Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/9fa2ef611c9849fd3426794362cf847d0d6f7c23a20ae6d9800988ad87ecb135/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9fa2ef611c9849fd3426794362cf847d0d6f7c23a20ae6d9800988ad87ecb135/hostname",
	        "HostsPath": "/var/lib/docker/containers/9fa2ef611c9849fd3426794362cf847d0d6f7c23a20ae6d9800988ad87ecb135/hosts",
	        "LogPath": "/var/lib/docker/containers/9fa2ef611c9849fd3426794362cf847d0d6f7c23a20ae6d9800988ad87ecb135/9fa2ef611c9849fd3426794362cf847d0d6f7c23a20ae6d9800988ad87ecb135-json.log",
	        "Name": "/addons-594989",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-594989:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-594989",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9fa2ef611c9849fd3426794362cf847d0d6f7c23a20ae6d9800988ad87ecb135",
	                "LowerDir": "/var/lib/docker/overlay2/6327399dea96096ab55cbb18fc07221ce0de561a801c7e62e54cae577730c751-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6327399dea96096ab55cbb18fc07221ce0de561a801c7e62e54cae577730c751/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6327399dea96096ab55cbb18fc07221ce0de561a801c7e62e54cae577730c751/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6327399dea96096ab55cbb18fc07221ce0de561a801c7e62e54cae577730c751/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-594989",
	                "Source": "/var/lib/docker/volumes/addons-594989/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-594989",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-594989",
	                "name.minikube.sigs.k8s.io": "addons-594989",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5feccdb82fa32fd4f5329a3672e4a95b15c3a398fc9ff55ab3b441482dac6882",
	            "SandboxKey": "/var/run/docker/netns/5feccdb82fa3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-594989": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:cc:14:87:ea:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e5afbb060625fdf00eb96e69bc6bb1936796d1d3115b5c6a81a4bfc12076dd40",
	                    "EndpointID": "037cbe2d2a9496d52ebc03cf599dcb81c89561985495e224bd952445db89c960",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-594989",
	                        "9fa2ef611c98"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
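Note: in the inspect output above, every PortBindings entry asks for HostIp 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral host ports; the assignments appear under NetworkSettings.Ports (22/tcp → 32768, 8443/tcp → 32771, and so on). That is why the stderr traces resolve the SSH port with a `docker container inspect -f` template rather than a fixed number. A sketch using the same template is below; the profile name is taken from this run.

// sshport.go — a sketch of resolving the dynamically assigned SSH port with
// the same inspect template that appears in the stderr traces above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"addons-594989").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 32768 in this run
}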
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-594989 -n addons-594989
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-594989 logs -n 25: (1.093362033s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-493383 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-493383   │ jenkins │ v1.37.0 │ 10 Oct 25 17:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Oct 25 17:29 UTC │ 10 Oct 25 17:29 UTC │
	│ delete  │ -p download-only-493383                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-493383   │ jenkins │ v1.37.0 │ 10 Oct 25 17:29 UTC │ 10 Oct 25 17:29 UTC │
	│ start   │ -o=json --download-only -p download-only-963459 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-963459   │ jenkins │ v1.37.0 │ 10 Oct 25 17:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │ 10 Oct 25 17:30 UTC │
	│ delete  │ -p download-only-963459                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-963459   │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │ 10 Oct 25 17:30 UTC │
	│ delete  │ -p download-only-493383                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-493383   │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │ 10 Oct 25 17:30 UTC │
	│ delete  │ -p download-only-963459                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-963459   │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │ 10 Oct 25 17:30 UTC │
	│ start   │ --download-only -p download-docker-494179 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-494179 │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │                     │
	│ delete  │ -p download-docker-494179                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-494179 │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │ 10 Oct 25 17:30 UTC │
	│ start   │ --download-only -p binary-mirror-717710 --alsologtostderr --binary-mirror http://127.0.0.1:39877 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-717710   │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │                     │
	│ delete  │ -p binary-mirror-717710                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-717710   │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │ 10 Oct 25 17:30 UTC │
	│ addons  │ enable dashboard -p addons-594989                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-594989          │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │                     │
	│ addons  │ disable dashboard -p addons-594989                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-594989          │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │                     │
	│ start   │ -p addons-594989 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-594989          │ jenkins │ v1.37.0 │ 10 Oct 25 17:30 UTC │ 10 Oct 25 17:32 UTC │
	│ addons  │ addons-594989 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-594989          │ jenkins │ v1.37.0 │ 10 Oct 25 17:32 UTC │                     │
	│ addons  │ addons-594989 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-594989          │ jenkins │ v1.37.0 │ 10 Oct 25 17:32 UTC │                     │
	│ addons  │ enable headlamp -p addons-594989 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-594989          │ jenkins │ v1.37.0 │ 10 Oct 25 17:32 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
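Any row in the audit table can be replayed by hand when triaging a failure. Below is a minimal sketch in Go of re-running the last command above (the headlamp enable, which has no END TIME), assuming the repo-built binary at out/minikube-linux-amd64; the exec wrapper is illustrative, not the harness's actual (dbg) Run helper.

// replay_audit.go: re-run one command from the audit table above and
// surface its exit status. Binary path and args come from the table;
// everything else is a sketch.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "addons-594989", "addons", "enable", "headlamp",
		"--alsologtostderr", "-v=1")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("non-zero exit:", err) // e.g. "exit status 11"
	}
}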
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 17:30:06
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 17:30:06.361014   10838 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:30:06.361342   10838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:30:06.361352   10838 out.go:374] Setting ErrFile to fd 2...
	I1010 17:30:06.361355   10838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:30:06.361576   10838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:30:06.362173   10838 out.go:368] Setting JSON to false
	I1010 17:30:06.363030   10838 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":746,"bootTime":1760116660,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 17:30:06.363137   10838 start.go:141] virtualization: kvm guest
	I1010 17:30:06.365117   10838 out.go:179] * [addons-594989] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 17:30:06.366254   10838 notify.go:220] Checking for updates...
	I1010 17:30:06.366294   10838 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 17:30:06.367395   10838 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 17:30:06.368875   10838 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 17:30:06.369995   10838 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 17:30:06.371029   10838 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 17:30:06.372047   10838 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 17:30:06.373208   10838 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 17:30:06.396002   10838 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 17:30:06.396126   10838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 17:30:06.454959   10838 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-10 17:30:06.44499643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
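The docker info blob above is the decoded result of `docker system info --format "{{json .}}"` (the command logged by cli_runner just before it). Below is a minimal sketch of the same probe; the dockerInfo struct is a hypothetical subset of the fields shown, not minikube's actual type.

// Probe the Docker daemon the way the log above does and decode a
// small subset of the JSON it returns.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type dockerInfo struct {
	ServerVersion string `json:"ServerVersion"`
	CgroupDriver  string `json:"CgroupDriver"`
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", info) // e.g. {ServerVersion:28.5.1 CgroupDriver:systemd NCPU:8 ...}
}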
	I1010 17:30:06.455093   10838 docker.go:318] overlay module found
	I1010 17:30:06.456955   10838 out.go:179] * Using the docker driver based on user configuration
	I1010 17:30:06.458129   10838 start.go:305] selected driver: docker
	I1010 17:30:06.458144   10838 start.go:925] validating driver "docker" against <nil>
	I1010 17:30:06.458155   10838 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 17:30:06.458713   10838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 17:30:06.514418   10838 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-10 17:30:06.50512395 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 17:30:06.514567   10838 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1010 17:30:06.514772   10838 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 17:30:06.516601   10838 out.go:179] * Using Docker driver with root privileges
	I1010 17:30:06.517737   10838 cni.go:84] Creating CNI manager for ""
	I1010 17:30:06.517795   10838 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 17:30:06.517808   10838 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1010 17:30:06.517871   10838 start.go:349] cluster config:
	{Name:addons-594989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-594989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 17:30:06.519220   10838 out.go:179] * Starting "addons-594989" primary control-plane node in "addons-594989" cluster
	I1010 17:30:06.520323   10838 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 17:30:06.521435   10838 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 17:30:06.522464   10838 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 17:30:06.522514   10838 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 17:30:06.522523   10838 cache.go:58] Caching tarball of preloaded images
	I1010 17:30:06.522579   10838 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 17:30:06.522609   10838 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 17:30:06.522617   10838 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1010 17:30:06.522976   10838 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/config.json ...
	I1010 17:30:06.522998   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/config.json: {Name:mk3dbc5a9832b9046e3fa50e98f8fc65a9bc2515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:06.540080   10838 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 to local cache
	I1010 17:30:06.540206   10838 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local cache directory
	I1010 17:30:06.540224   10838 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local cache directory, skipping pull
	I1010 17:30:06.540230   10838 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in cache, skipping pull
	I1010 17:30:06.540237   10838 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 as a tarball
	I1010 17:30:06.540243   10838 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 from local cache
	I1010 17:30:21.903904   10838 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 from cached tarball
	I1010 17:30:21.903939   10838 cache.go:232] Successfully downloaded all kic artifacts
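The base-image flow above is: check the local docker daemon, check the local cache directory, then load the cached tarball into the daemon (the ~15s step). Below is a rough sketch of that check-then-load shape using plain docker CLI calls; minikube's own loader works differently internally, and the tarball path here is hypothetical.

// Ensure the kic base image is present in the daemon, loading it from
// a cached tarball when it is not — a sketch of the flow logged above.
package main

import (
	"fmt"
	"os/exec"
)

func ensureBaseImage(ref, tarball string) error {
	// `docker image inspect` exits non-zero when the image is absent.
	if err := exec.Command("docker", "image", "inspect", ref).Run(); err == nil {
		fmt.Println("image already in daemon, skipping load")
		return nil
	}
	return exec.Command("docker", "load", "-i", tarball).Run()
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724"
	tarball := "/path/to/cached/kicbase.tar" // hypothetical cache path
	if err := ensureBaseImage(ref, tarball); err != nil {
		panic(err)
	}
}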
	I1010 17:30:21.903979   10838 start.go:360] acquireMachinesLock for addons-594989: {Name:mk3be95cc494884c6edea2e4e0b6f8ab4aa5f686 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 17:30:21.904098   10838 start.go:364] duration metric: took 98.011µs to acquireMachinesLock for "addons-594989"
	I1010 17:30:21.904123   10838 start.go:93] Provisioning new machine with config: &{Name:addons-594989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-594989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 17:30:21.904185   10838 start.go:125] createHost starting for "" (driver="docker")
	I1010 17:30:21.905621   10838 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1010 17:30:21.905804   10838 start.go:159] libmachine.API.Create for "addons-594989" (driver="docker")
	I1010 17:30:21.905829   10838 client.go:168] LocalClient.Create starting
	I1010 17:30:21.905927   10838 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem
	I1010 17:30:22.098456   10838 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem
	I1010 17:30:22.664950   10838 cli_runner.go:164] Run: docker network inspect addons-594989 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1010 17:30:22.681400   10838 cli_runner.go:211] docker network inspect addons-594989 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1010 17:30:22.681490   10838 network_create.go:284] running [docker network inspect addons-594989] to gather additional debugging logs...
	I1010 17:30:22.681513   10838 cli_runner.go:164] Run: docker network inspect addons-594989
	W1010 17:30:22.696705   10838 cli_runner.go:211] docker network inspect addons-594989 returned with exit code 1
	I1010 17:30:22.696749   10838 network_create.go:287] error running [docker network inspect addons-594989]: docker network inspect addons-594989: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-594989 not found
	I1010 17:30:22.696769   10838 network_create.go:289] output of [docker network inspect addons-594989]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-594989 not found
	
	** /stderr **
	I1010 17:30:22.696888   10838 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 17:30:22.713531   10838 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d78860}
	I1010 17:30:22.713579   10838 network_create.go:124] attempt to create docker network addons-594989 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1010 17:30:22.713625   10838 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-594989 addons-594989
	I1010 17:30:22.767588   10838 network_create.go:108] docker network addons-594989 192.168.49.0/24 created
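The network bring-up above is inspect-then-create: the first inspect fails with "not found", a free private /24 is chosen, and the bridge network is created. Below is a minimal sketch of the same two CLI calls, with the flags and subnet taken verbatim from the log; minikube additionally scans candidate subnets before settling on 192.168.49.0/24.

// Inspect-then-create for the cluster's docker network, mirroring the
// network_create.go commands logged above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "addons-594989"
	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
		fmt.Println("network already exists")
		return
	}
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		name).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	fmt.Printf("created network %s", out) // docker prints the network ID
}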
	I1010 17:30:22.767616   10838 kic.go:121] calculated static IP "192.168.49.2" for the "addons-594989" container
	I1010 17:30:22.767671   10838 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1010 17:30:22.782992   10838 cli_runner.go:164] Run: docker volume create addons-594989 --label name.minikube.sigs.k8s.io=addons-594989 --label created_by.minikube.sigs.k8s.io=true
	I1010 17:30:22.799155   10838 oci.go:103] Successfully created a docker volume addons-594989
	I1010 17:30:22.799221   10838 cli_runner.go:164] Run: docker run --rm --name addons-594989-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-594989 --entrypoint /usr/bin/test -v addons-594989:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -d /var/lib
	I1010 17:30:26.636008   10838 cli_runner.go:217] Completed: docker run --rm --name addons-594989-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-594989 --entrypoint /usr/bin/test -v addons-594989:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -d /var/lib: (3.836748235s)
	I1010 17:30:26.636034   10838 oci.go:107] Successfully prepared a docker volume addons-594989
	I1010 17:30:26.636045   10838 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 17:30:26.636109   10838 kic.go:194] Starting extracting preloaded images to volume ...
	I1010 17:30:26.636155   10838 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-594989:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1010 17:30:30.967731   10838 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-594989:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.331540703s)
	I1010 17:30:30.967757   10838 kic.go:203] duration metric: took 4.331646769s to extract preloaded images to volume ...
	W1010 17:30:30.967837   10838 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1010 17:30:30.967868   10838 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1010 17:30:30.967903   10838 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1010 17:30:31.021751   10838 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-594989 --name addons-594989 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-594989 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-594989 --network addons-594989 --ip 192.168.49.2 --volume addons-594989:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6
	I1010 17:30:31.302357   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Running}}
	I1010 17:30:31.320699   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:31.338440   10838 cli_runner.go:164] Run: docker exec addons-594989 stat /var/lib/dpkg/alternatives/iptables
	I1010 17:30:31.386035   10838 oci.go:144] the created container "addons-594989" has a running status.
	I1010 17:30:31.386096   10838 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa...
	I1010 17:30:31.880950   10838 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1010 17:30:31.905495   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:31.924177   10838 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1010 17:30:31.924201   10838 kic_runner.go:114] Args: [docker exec --privileged addons-594989 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1010 17:30:31.961364   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:31.978848   10838 machine.go:93] provisionDockerMachine start ...
	I1010 17:30:31.978930   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:31.996247   10838 main.go:141] libmachine: Using SSH client type: native
	I1010 17:30:31.996467   10838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1010 17:30:31.996478   10838 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 17:30:32.126861   10838 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-594989
	
	I1010 17:30:32.126888   10838 ubuntu.go:182] provisioning hostname "addons-594989"
	I1010 17:30:32.126950   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:32.143409   10838 main.go:141] libmachine: Using SSH client type: native
	I1010 17:30:32.143599   10838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1010 17:30:32.143613   10838 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-594989 && echo "addons-594989" | sudo tee /etc/hostname
	I1010 17:30:32.283957   10838 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-594989
	
	I1010 17:30:32.284037   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:32.301087   10838 main.go:141] libmachine: Using SSH client type: native
	I1010 17:30:32.301289   10838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1010 17:30:32.301305   10838 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-594989' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-594989/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-594989' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 17:30:32.431567   10838 main.go:141] libmachine: SSH cmd err, output: <nil>: 
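Each "About to run SSH command" above travels over the forwarded 127.0.0.1:32768 port using the generated id_rsa. Below is a minimal sketch of the same channel with golang.org/x/crypto/ssh, key path and port taken from the log; error handling is reduced to panics for brevity.

// Run a command inside the node container over the forwarded SSH port,
// as the libmachine lines above do.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable on a local test rig only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // addons-594989
}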
	I1010 17:30:32.431599   10838 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 17:30:32.431633   10838 ubuntu.go:190] setting up certificates
	I1010 17:30:32.431645   10838 provision.go:84] configureAuth start
	I1010 17:30:32.431707   10838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-594989
	I1010 17:30:32.449703   10838 provision.go:143] copyHostCerts
	I1010 17:30:32.449782   10838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 17:30:32.449918   10838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 17:30:32.450011   10838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 17:30:32.450107   10838 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.addons-594989 san=[127.0.0.1 192.168.49.2 addons-594989 localhost minikube]
	I1010 17:30:32.585040   10838 provision.go:177] copyRemoteCerts
	I1010 17:30:32.585111   10838 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 17:30:32.585142   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:32.601612   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:32.697574   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 17:30:32.717499   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1010 17:30:32.736498   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 17:30:32.755394   10838 provision.go:87] duration metric: took 323.735274ms to configureAuth
	I1010 17:30:32.755414   10838 ubuntu.go:206] setting minikube options for container-runtime
	I1010 17:30:32.755575   10838 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:30:32.755662   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:32.772719   10838 main.go:141] libmachine: Using SSH client type: native
	I1010 17:30:32.772922   10838 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1010 17:30:32.772939   10838 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 17:30:33.043359   10838 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 17:30:33.043378   10838 machine.go:96] duration metric: took 1.064511043s to provisionDockerMachine
	I1010 17:30:33.043388   10838 client.go:171] duration metric: took 11.137551987s to LocalClient.Create
	I1010 17:30:33.043404   10838 start.go:167] duration metric: took 11.137598801s to libmachine.API.Create "addons-594989"
	I1010 17:30:33.043413   10838 start.go:293] postStartSetup for "addons-594989" (driver="docker")
	I1010 17:30:33.043425   10838 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 17:30:33.043479   10838 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 17:30:33.043532   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:33.061237   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:33.158323   10838 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 17:30:33.161594   10838 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 17:30:33.161619   10838 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 17:30:33.161629   10838 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 17:30:33.161684   10838 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 17:30:33.161707   10838 start.go:296] duration metric: took 118.287948ms for postStartSetup
	I1010 17:30:33.161960   10838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-594989
	I1010 17:30:33.178986   10838 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/config.json ...
	I1010 17:30:33.179282   10838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 17:30:33.179330   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:33.197090   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:33.288847   10838 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 17:30:33.292976   10838 start.go:128] duration metric: took 11.388779655s to createHost
	I1010 17:30:33.292994   10838 start.go:83] releasing machines lock for "addons-594989", held for 11.38888394s
	I1010 17:30:33.293065   10838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-594989
	I1010 17:30:33.310630   10838 ssh_runner.go:195] Run: cat /version.json
	I1010 17:30:33.310681   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:33.310713   10838 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 17:30:33.310765   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:33.328441   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:33.330683   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:33.420133   10838 ssh_runner.go:195] Run: systemctl --version
	I1010 17:30:33.475613   10838 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 17:30:33.512267   10838 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 17:30:33.516907   10838 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 17:30:33.516975   10838 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 17:30:33.545910   10838 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 17:30:33.545934   10838 start.go:495] detecting cgroup driver to use...
	I1010 17:30:33.545968   10838 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 17:30:33.546028   10838 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 17:30:33.563138   10838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 17:30:33.576306   10838 docker.go:218] disabling cri-docker service (if available) ...
	I1010 17:30:33.576364   10838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 17:30:33.593319   10838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 17:30:33.611251   10838 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 17:30:33.691837   10838 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 17:30:33.781458   10838 docker.go:234] disabling docker service ...
	I1010 17:30:33.781544   10838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 17:30:33.800098   10838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 17:30:33.813380   10838 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 17:30:33.899213   10838 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 17:30:33.981359   10838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 17:30:33.994260   10838 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 17:30:34.009329   10838 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 17:30:34.009377   10838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:30:34.020133   10838 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 17:30:34.020214   10838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:30:34.029853   10838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:30:34.039331   10838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:30:34.048759   10838 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 17:30:34.057508   10838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:30:34.067102   10838 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 17:30:34.082131   10838 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
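Taken together, the sed passes above converge on a /etc/crio/crio.conf.d/02-crio.conf whose relevant keys read roughly as follows. This is reconstructed from the commands, not captured from the node, and surrounding TOML sections are omitted:

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]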
	I1010 17:30:34.091405   10838 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 17:30:34.099616   10838 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1010 17:30:34.099673   10838 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1010 17:30:34.111830   10838 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 17:30:34.120353   10838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 17:30:34.201747   10838 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 17:30:34.337513   10838 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 17:30:34.337584   10838 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 17:30:34.341465   10838 start.go:563] Will wait 60s for crictl version
	I1010 17:30:34.341512   10838 ssh_runner.go:195] Run: which crictl
	I1010 17:30:34.344934   10838 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 17:30:34.370191   10838 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 17:30:34.370335   10838 ssh_runner.go:195] Run: crio --version
	I1010 17:30:34.397163   10838 ssh_runner.go:195] Run: crio --version
	I1010 17:30:34.425005   10838 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 17:30:34.426025   10838 cli_runner.go:164] Run: docker network inspect addons-594989 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 17:30:34.443404   10838 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1010 17:30:34.447406   10838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 17:30:34.457961   10838 kubeadm.go:883] updating cluster {Name:addons-594989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-594989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 17:30:34.458104   10838 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 17:30:34.458172   10838 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 17:30:34.489367   10838 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 17:30:34.489388   10838 crio.go:433] Images already preloaded, skipping extraction
	I1010 17:30:34.489444   10838 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 17:30:34.515632   10838 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 17:30:34.515653   10838 cache_images.go:85] Images are preloaded, skipping loading
	I1010 17:30:34.515659   10838 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1010 17:30:34.515744   10838 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-594989 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-594989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 17:30:34.515818   10838 ssh_runner.go:195] Run: crio config
	I1010 17:30:34.560118   10838 cni.go:84] Creating CNI manager for ""
	I1010 17:30:34.560140   10838 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 17:30:34.560168   10838 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 17:30:34.560196   10838 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-594989 NodeName:addons-594989 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 17:30:34.560342   10838 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-594989"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
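
	Note: the kubeadm config dumped above is a single four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new below. A minimal Go sketch for splitting such a stream and listing each document's kind — assuming gopkg.in/yaml.v3 and a local copy of the file; this is illustration, not minikube code:

	    package main

	    import (
	        "fmt"
	        "io"
	        "os"

	        "gopkg.in/yaml.v3"
	    )

	    func main() {
	        // Path taken from the log below; point this at a local copy if needed.
	        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        defer f.Close()

	        dec := yaml.NewDecoder(f) // decodes one YAML document per Decode call
	        for {
	            var doc struct {
	                APIVersion string `yaml:"apiVersion"`
	                Kind       string `yaml:"kind"`
	            }
	            if err := dec.Decode(&doc); err == io.EOF {
	                break
	            } else if err != nil {
	                fmt.Fprintln(os.Stderr, err)
	                os.Exit(1)
	            }
	            fmt.Println(doc.APIVersion, doc.Kind)
	        }
	    }

	For the config above this would print the four apiVersion/kind pairs, one per document.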
	
	I1010 17:30:34.560405   10838 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 17:30:34.569025   10838 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 17:30:34.569097   10838 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 17:30:34.577263   10838 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1010 17:30:34.590731   10838 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 17:30:34.607183   10838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1010 17:30:34.620714   10838 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1010 17:30:34.624223   10838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
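
	Note: the /etc/hosts rewrite above is an idempotent upsert — it filters out any line already ending in a tab plus control-plane.minikube.internal, appends the fresh mapping, and copies the temp file back over /etc/hosts. A rough Go equivalent of the same pattern (a hypothetical helper for illustration, not the actual minikube code):

	    package main

	    import (
	        "fmt"
	        "os"
	        "strings"
	    )

	    // upsertHostsEntry drops any line already ending in "\t"+host, then
	    // appends "ip\thost". Writing /etc/hosts itself requires root.
	    func upsertHostsEntry(path, ip, host string) error {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return err
	        }
	        var kept []string
	        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	            if !strings.HasSuffix(line, "\t"+host) {
	                kept = append(kept, line)
	            }
	        }
	        kept = append(kept, ip+"\t"+host)
	        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	    }

	    func main() {
	        if err := upsertHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	    }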
	I1010 17:30:34.634393   10838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 17:30:34.714616   10838 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 17:30:34.739452   10838 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989 for IP: 192.168.49.2
	I1010 17:30:34.739470   10838 certs.go:195] generating shared ca certs ...
	I1010 17:30:34.739484   10838 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:34.739609   10838 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 17:30:35.139130   10838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt ...
	I1010 17:30:35.139158   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt: {Name:mk650aa7f4ff32ad966d5e8b39e5e2b32aca7c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.139352   10838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key ...
	I1010 17:30:35.139367   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key: {Name:mkdb6a8b6dbc479523f0cc85aae637cf977fc8fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.139474   10838 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 17:30:35.389507   10838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt ...
	I1010 17:30:35.389535   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt: {Name:mkde245fd3fbe3a5dace53fe07e5b3036cbfe44d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.389721   10838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key ...
	I1010 17:30:35.389735   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key: {Name:mkfd3baeb73564eb3c648c6dd88a16a028f3b4b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.389841   10838 certs.go:257] generating profile certs ...
	I1010 17:30:35.389904   10838 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.key
	I1010 17:30:35.389924   10838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt with IP's: []
	I1010 17:30:35.739400   10838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt ...
	I1010 17:30:35.739430   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: {Name:mka71586df5688b96c522c53e41e713d2b473b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.739635   10838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.key ...
	I1010 17:30:35.739649   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.key: {Name:mka2d321442c6650503dfec5163f4835012b868d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.739755   10838 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.key.b6f25ad6
	I1010 17:30:35.739776   10838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.crt.b6f25ad6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1010 17:30:35.993166   10838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.crt.b6f25ad6 ...
	I1010 17:30:35.993193   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.crt.b6f25ad6: {Name:mk71a098e94d4700b6684874f0d747b97e6a32bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.993386   10838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.key.b6f25ad6 ...
	I1010 17:30:35.993405   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.key.b6f25ad6: {Name:mk36505aa863981e7d7fa7fe93adca4604c45146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:35.993516   10838 certs.go:382] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.crt.b6f25ad6 -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.crt
	I1010 17:30:35.993610   10838 certs.go:386] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.key.b6f25ad6 -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.key
	I1010 17:30:35.993665   10838 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.key
	I1010 17:30:35.993684   10838 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.crt with IP's: []
	I1010 17:30:36.455456   10838 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.crt ...
	I1010 17:30:36.455489   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.crt: {Name:mk617bd2c82bf6e1ed8206255f493cfc594258af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:36.455667   10838 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.key ...
	I1010 17:30:36.455678   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.key: {Name:mkc40a9546349e2ce8cf3e7efa3c131c37c4b0e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:36.455839   10838 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 17:30:36.455873   10838 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 17:30:36.455894   10838 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 17:30:36.455913   10838 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 17:30:36.456511   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 17:30:36.476751   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 17:30:36.496310   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 17:30:36.514988   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 17:30:36.533841   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1010 17:30:36.552425   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 17:30:36.570981   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 17:30:36.589519   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 17:30:36.607848   10838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 17:30:36.627932   10838 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 17:30:36.641761   10838 ssh_runner.go:195] Run: openssl version
	I1010 17:30:36.647705   10838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 17:30:36.658884   10838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 17:30:36.662498   10838 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 17:30:36.662553   10838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 17:30:36.695938   10838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 17:30:36.705388   10838 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 17:30:36.709009   10838 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 17:30:36.709073   10838 kubeadm.go:400] StartCluster: {Name:addons-594989 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-594989 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 17:30:36.709159   10838 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:30:36.709230   10838 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:30:36.736008   10838 cri.go:89] found id: ""
	I1010 17:30:36.736086   10838 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 17:30:36.744667   10838 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 17:30:36.753014   10838 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1010 17:30:36.753095   10838 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 17:30:36.761163   10838 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 17:30:36.761183   10838 kubeadm.go:157] found existing configuration files:
	
	I1010 17:30:36.761220   10838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 17:30:36.769267   10838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 17:30:36.769315   10838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 17:30:36.777999   10838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 17:30:36.786347   10838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 17:30:36.786403   10838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 17:30:36.794664   10838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 17:30:36.802906   10838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 17:30:36.802960   10838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 17:30:36.810638   10838 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 17:30:36.818492   10838 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 17:30:36.818538   10838 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 17:30:36.826261   10838 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1010 17:30:36.862682   10838 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1010 17:30:36.862742   10838 kubeadm.go:318] [preflight] Running pre-flight checks
	I1010 17:30:36.882354   10838 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1010 17:30:36.882414   10838 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1010 17:30:36.882442   10838 kubeadm.go:318] OS: Linux
	I1010 17:30:36.882501   10838 kubeadm.go:318] CGROUPS_CPU: enabled
	I1010 17:30:36.882579   10838 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1010 17:30:36.882667   10838 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1010 17:30:36.882743   10838 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1010 17:30:36.882814   10838 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1010 17:30:36.882881   10838 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1010 17:30:36.882950   10838 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1010 17:30:36.883029   10838 kubeadm.go:318] CGROUPS_IO: enabled
	I1010 17:30:36.934833   10838 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 17:30:36.934959   10838 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 17:30:36.935109   10838 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 17:30:36.941920   10838 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 17:30:36.943895   10838 out.go:252]   - Generating certificates and keys ...
	I1010 17:30:36.943990   10838 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1010 17:30:36.944115   10838 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1010 17:30:37.237708   10838 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1010 17:30:37.571576   10838 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1010 17:30:37.872370   10838 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1010 17:30:38.236368   10838 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1010 17:30:38.285619   10838 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1010 17:30:38.285805   10838 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-594989 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1010 17:30:38.358044   10838 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1010 17:30:38.358248   10838 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-594989 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1010 17:30:38.795818   10838 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1010 17:30:39.043953   10838 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1010 17:30:39.269139   10838 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1010 17:30:39.269249   10838 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 17:30:39.585961   10838 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 17:30:39.816850   10838 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 17:30:39.942130   10838 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 17:30:40.382870   10838 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 17:30:40.449962   10838 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 17:30:40.451000   10838 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 17:30:40.455038   10838 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 17:30:40.456513   10838 out.go:252]   - Booting up control plane ...
	I1010 17:30:40.456597   10838 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 17:30:40.456681   10838 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 17:30:40.457249   10838 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 17:30:40.470937   10838 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 17:30:40.471115   10838 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1010 17:30:40.477516   10838 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1010 17:30:40.477816   10838 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 17:30:40.477871   10838 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1010 17:30:40.573934   10838 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 17:30:40.574138   10838 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 17:30:41.075746   10838 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.949615ms
	I1010 17:30:41.079730   10838 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1010 17:30:41.079839   10838 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1010 17:30:41.079949   10838 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1010 17:30:41.080087   10838 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1010 17:30:42.214908   10838 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.135122439s
	I1010 17:30:44.000622   10838 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.920896089s
	I1010 17:30:45.581431   10838 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501641891s
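
	Note: the kubelet-check and control-plane-check phases above simply poll local health endpoints (http://127.0.0.1:10248/healthz for the kubelet, /livez and /healthz on the component ports) until they answer 200, with a stated 4m0s cap. A minimal Go poller in the same spirit — a sketch, not kubeadm's actual implementation:

	    package main

	    import (
	        "fmt"
	        "net/http"
	        "os"
	        "time"
	    )

	    func main() {
	        deadline := time.Now().Add(4 * time.Minute) // kubeadm's stated cap
	        for time.Now().Before(deadline) {
	            resp, err := http.Get("http://127.0.0.1:10248/healthz")
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Println("kubelet is healthy")
	                    return
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        fmt.Fprintln(os.Stderr, "kubelet never became healthy")
	        os.Exit(1)
	    }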
	I1010 17:30:45.591384   10838 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 17:30:45.599794   10838 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 17:30:45.607635   10838 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 17:30:45.607939   10838 kubeadm.go:318] [mark-control-plane] Marking the node addons-594989 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 17:30:45.614517   10838 kubeadm.go:318] [bootstrap-token] Using token: g8m8ob.vbiavqs0zz8j6p83
	I1010 17:30:45.615663   10838 out.go:252]   - Configuring RBAC rules ...
	I1010 17:30:45.615821   10838 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 17:30:45.618526   10838 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 17:30:45.622660   10838 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 17:30:45.624718   10838 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 17:30:45.627306   10838 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 17:30:45.629262   10838 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 17:30:45.986784   10838 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 17:30:46.399265   10838 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1010 17:30:46.985935   10838 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1010 17:30:46.986793   10838 kubeadm.go:318] 
	I1010 17:30:46.986900   10838 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1010 17:30:46.986918   10838 kubeadm.go:318] 
	I1010 17:30:46.986996   10838 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1010 17:30:46.987005   10838 kubeadm.go:318] 
	I1010 17:30:46.987040   10838 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1010 17:30:46.987136   10838 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 17:30:46.987220   10838 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 17:30:46.987238   10838 kubeadm.go:318] 
	I1010 17:30:46.987363   10838 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1010 17:30:46.987380   10838 kubeadm.go:318] 
	I1010 17:30:46.987450   10838 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 17:30:46.987457   10838 kubeadm.go:318] 
	I1010 17:30:46.987530   10838 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1010 17:30:46.987630   10838 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 17:30:46.987725   10838 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 17:30:46.987734   10838 kubeadm.go:318] 
	I1010 17:30:46.987838   10838 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 17:30:46.987938   10838 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1010 17:30:46.987948   10838 kubeadm.go:318] 
	I1010 17:30:46.988081   10838 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token g8m8ob.vbiavqs0zz8j6p83 \
	I1010 17:30:46.988231   10838 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f \
	I1010 17:30:46.988271   10838 kubeadm.go:318] 	--control-plane 
	I1010 17:30:46.988280   10838 kubeadm.go:318] 
	I1010 17:30:46.988388   10838 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1010 17:30:46.988395   10838 kubeadm.go:318] 
	I1010 17:30:46.988505   10838 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token g8m8ob.vbiavqs0zz8j6p83 \
	I1010 17:30:46.988632   10838 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f 
	I1010 17:30:46.989919   10838 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1010 17:30:46.990089   10838 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
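
	Note: the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A short standard-library sketch that recomputes it from the ca.crt generated earlier in this log (path taken from the certs section above):

	    package main

	    import (
	        "crypto/sha256"
	        "crypto/x509"
	        "encoding/pem"
	        "fmt"
	        "os"
	    )

	    func main() {
	        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        block, _ := pem.Decode(pemBytes)
	        if block == nil {
	            fmt.Fprintln(os.Stderr, "no PEM block in ca.crt")
	            os.Exit(1)
	        }
	        cert, err := x509.ParseCertificate(block.Bytes)
	        if err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	        // kubeadm hashes the raw SubjectPublicKeyInfo, not the whole cert.
	        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	        fmt.Printf("sha256:%x\n", sum)
	    }

	Run against this cluster's CA, it should print the sha256:08dcb68c… value shown in the join commands above.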
	I1010 17:30:46.990123   10838 cni.go:84] Creating CNI manager for ""
	I1010 17:30:46.990141   10838 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 17:30:46.992445   10838 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1010 17:30:46.993568   10838 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1010 17:30:46.997677   10838 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1010 17:30:46.997689   10838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1010 17:30:47.011602   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1010 17:30:47.207486   10838 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 17:30:47.207566   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:47.207584   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-594989 minikube.k8s.io/updated_at=2025_10_10T17_30_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46 minikube.k8s.io/name=addons-594989 minikube.k8s.io/primary=true
	I1010 17:30:47.217041   10838 ops.go:34] apiserver oom_adj: -16
	I1010 17:30:47.279542   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:47.780184   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:48.279973   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:48.780138   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:49.280342   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:49.779793   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:50.280472   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:50.779777   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:51.280149   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:51.780325   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:52.279623   10838 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 17:30:52.339889   10838 kubeadm.go:1113] duration metric: took 5.132383023s to wait for elevateKubeSystemPrivileges
	I1010 17:30:52.339928   10838 kubeadm.go:402] duration metric: took 15.630857792s to StartCluster
	I1010 17:30:52.339951   10838 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:52.340081   10838 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 17:30:52.340471   10838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:30:52.340642   10838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 17:30:52.340654   10838 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 17:30:52.340730   10838 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1010 17:30:52.340849   10838 addons.go:69] Setting yakd=true in profile "addons-594989"
	I1010 17:30:52.340873   10838 addons.go:238] Setting addon yakd=true in "addons-594989"
	I1010 17:30:52.340873   10838 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-594989"
	I1010 17:30:52.340883   10838 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-594989"
	I1010 17:30:52.340889   10838 addons.go:69] Setting ingress=true in profile "addons-594989"
	I1010 17:30:52.340902   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.340906   10838 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-594989"
	I1010 17:30:52.340910   10838 addons.go:69] Setting registry=true in profile "addons-594989"
	I1010 17:30:52.340914   10838 addons.go:238] Setting addon ingress=true in "addons-594989"
	I1010 17:30:52.340912   10838 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:30:52.340923   10838 addons.go:238] Setting addon registry=true in "addons-594989"
	I1010 17:30:52.340940   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.340941   10838 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-594989"
	I1010 17:30:52.340950   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.340958   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.340972   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.340981   10838 addons.go:69] Setting default-storageclass=true in profile "addons-594989"
	I1010 17:30:52.340999   10838 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-594989"
	I1010 17:30:52.341028   10838 addons.go:69] Setting registry-creds=true in profile "addons-594989"
	I1010 17:30:52.341067   10838 addons.go:238] Setting addon registry-creds=true in "addons-594989"
	I1010 17:30:52.341090   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.341260   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341418   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341435   10838 addons.go:69] Setting volcano=true in profile "addons-594989"
	I1010 17:30:52.341446   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341450   10838 addons.go:69] Setting gcp-auth=true in profile "addons-594989"
	I1010 17:30:52.341454   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341457   10838 addons.go:69] Setting inspektor-gadget=true in profile "addons-594989"
	I1010 17:30:52.341461   10838 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-594989"
	I1010 17:30:52.341469   10838 mustload.go:65] Loading cluster: addons-594989
	I1010 17:30:52.341473   10838 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-594989"
	I1010 17:30:52.341478   10838 addons.go:69] Setting metrics-server=true in profile "addons-594989"
	I1010 17:30:52.341490   10838 addons.go:238] Setting addon metrics-server=true in "addons-594989"
	I1010 17:30:52.341502   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341509   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.341617   10838 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:30:52.341725   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341881   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341902   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341437   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.342840   10838 addons.go:69] Setting cloud-spanner=true in profile "addons-594989"
	I1010 17:30:52.342853   10838 addons.go:238] Setting addon cloud-spanner=true in "addons-594989"
	I1010 17:30:52.342877   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.342923   10838 addons.go:69] Setting volumesnapshots=true in profile "addons-594989"
	I1010 17:30:52.342940   10838 addons.go:238] Setting addon volumesnapshots=true in "addons-594989"
	I1010 17:30:52.342963   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.341470   10838 addons.go:238] Setting addon inspektor-gadget=true in "addons-594989"
	I1010 17:30:52.343011   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.343325   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.343430   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.343439   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.341448   10838 addons.go:69] Setting ingress-dns=true in profile "addons-594989"
	I1010 17:30:52.343844   10838 addons.go:238] Setting addon ingress-dns=true in "addons-594989"
	I1010 17:30:52.343877   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.344336   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.340889   10838 addons.go:69] Setting storage-provisioner=true in profile "addons-594989"
	I1010 17:30:52.344558   10838 addons.go:238] Setting addon storage-provisioner=true in "addons-594989"
	I1010 17:30:52.344584   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.341451   10838 addons.go:238] Setting addon volcano=true in "addons-594989"
	I1010 17:30:52.344642   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.342827   10838 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-594989"
	I1010 17:30:52.344787   10838 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-594989"
	I1010 17:30:52.344821   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.341437   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.345116   10838 out.go:179] * Verifying Kubernetes components...
	I1010 17:30:52.348075   10838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 17:30:52.354579   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.355386   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.355856   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.386390   10838 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1010 17:30:52.388793   10838 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1010 17:30:52.388820   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1010 17:30:52.388881   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.402639   10838 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-594989"
	I1010 17:30:52.402671   10838 addons.go:238] Setting addon default-storageclass=true in "addons-594989"
	I1010 17:30:52.402686   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.402702   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.403177   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.403212   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:30:52.404011   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1010 17:30:52.405150   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1010 17:30:52.405172   10838 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1010 17:30:52.407545   10838 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1010 17:30:52.408259   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1010 17:30:52.409724   10838 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1010 17:30:52.409774   10838 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1010 17:30:52.409835   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.410863   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1010 17:30:52.410899   10838 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1010 17:30:52.410913   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1010 17:30:52.410959   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.415092   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1010 17:30:52.418938   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1010 17:30:52.426025   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1010 17:30:52.426168   10838 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1010 17:30:52.426450   10838 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1010 17:30:52.427734   10838 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1010 17:30:52.427752   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1010 17:30:52.427811   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.429111   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1010 17:30:52.430103   10838 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1010 17:30:52.430124   10838 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1010 17:30:52.430182   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.430242   10838 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1010 17:30:52.431424   10838 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1010 17:30:52.432592   10838 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1010 17:30:52.432610   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1010 17:30:52.432663   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.432922   10838 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1010 17:30:52.434175   10838 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1010 17:30:52.434186   10838 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1010 17:30:52.434191   10838 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1010 17:30:52.434304   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.436276   10838 out.go:179]   - Using image docker.io/registry:3.0.0
	I1010 17:30:52.437282   10838 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1010 17:30:52.437301   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1010 17:30:52.437351   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.439138   10838 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1010 17:30:52.440265   10838 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1010 17:30:52.440282   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1010 17:30:52.440334   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.444754   10838 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1010 17:30:52.446865   10838 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1010 17:30:52.446888   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1010 17:30:52.446949   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.447791   10838 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1010 17:30:52.449312   10838 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1010 17:30:52.449331   10838 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1010 17:30:52.449415   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.454974   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:30:52.461979   10838 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 17:30:52.462005   10838 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 17:30:52.462072   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.469549   10838 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1010 17:30:52.470720   10838 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1010 17:30:52.470748   10838 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1010 17:30:52.470813   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	W1010 17:30:52.471286   10838 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1010 17:30:52.479195   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.479659   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.480084   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.480177   10838 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 17:30:52.481651   10838 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 17:30:52.481669   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 17:30:52.482253   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.501847   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.502437   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.504015   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.514495   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.515374   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.516707   10838 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
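
	Note: the sed pipeline above patches the coredns ConfigMap before replacing it: it injects a hosts block ahead of the forward directive so host.minikube.internal resolves to the host gateway (192.168.49.1), and inserts log before errors. The patched Corefile fragment should look roughly like this (unchanged directives elided):

	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf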
	I1010 17:30:52.517755   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.520122   10838 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1010 17:30:52.522119   10838 out.go:179]   - Using image docker.io/busybox:stable
	I1010 17:30:52.523412   10838 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1010 17:30:52.523759   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1010 17:30:52.523824   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:30:52.523662   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.523480   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.535080   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.545451   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.558089   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.560619   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:30:52.578490   10838 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 17:30:52.687687   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1010 17:30:52.689477   10838 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1010 17:30:52.689500   10838 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1010 17:30:52.692402   10838 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:30:52.692423   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1010 17:30:52.694113   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1010 17:30:52.701841   10838 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1010 17:30:52.701863   10838 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1010 17:30:52.706979   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1010 17:30:52.710882   10838 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1010 17:30:52.710900   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1010 17:30:52.711214   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1010 17:30:52.720246   10838 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1010 17:30:52.720270   10838 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1010 17:30:52.728725   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1010 17:30:52.732738   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1010 17:30:52.738398   10838 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1010 17:30:52.738482   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1010 17:30:52.740877   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1010 17:30:52.742855   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:30:52.744088   10838 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1010 17:30:52.744109   10838 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1010 17:30:52.748684   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 17:30:52.752580   10838 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1010 17:30:52.752608   10838 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1010 17:30:52.767320   10838 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1010 17:30:52.767344   10838 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1010 17:30:52.783095   10838 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1010 17:30:52.783125   10838 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1010 17:30:52.793840   10838 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1010 17:30:52.793870   10838 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1010 17:30:52.797623   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1010 17:30:52.801359   10838 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 17:30:52.801382   10838 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1010 17:30:52.801555   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 17:30:52.820711   10838 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1010 17:30:52.820751   10838 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1010 17:30:52.838956   10838 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1010 17:30:52.838982   10838 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1010 17:30:52.865707   10838 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1010 17:30:52.865734   10838 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1010 17:30:52.873664   10838 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1010 17:30:52.873759   10838 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1010 17:30:52.884425   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1010 17:30:52.897170   10838 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1010 17:30:52.897197   10838 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1010 17:30:52.938677   10838 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1010 17:30:52.938800   10838 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1010 17:30:52.944803   10838 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1010 17:30:52.944870   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1010 17:30:52.983822   10838 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1010 17:30:52.983846   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1010 17:30:53.000443   10838 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
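The ConfigMap edit logged at 17:30:52.516707 splices a hosts plugin block into the CoreDNS Corefile so that host.minikube.internal resolves to the container gateway. A quick way to confirm the injected fragment (a sketch reconstructed from the sed expressions above, not captured from the cluster):

    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    # expected fragment after the edit:
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }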
	I1010 17:30:53.001430   10838 node_ready.go:35] waiting up to 6m0s for node "addons-594989" to be "Ready" ...
	I1010 17:30:53.001741   10838 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1010 17:30:53.001755   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1010 17:30:53.022578   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1010 17:30:53.042897   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1010 17:30:53.088766   10838 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1010 17:30:53.088863   10838 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1010 17:30:53.162893   10838 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1010 17:30:53.162970   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1010 17:30:53.202950   10838 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1010 17:30:53.202972   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1010 17:30:53.232453   10838 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1010 17:30:53.232477   10838 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1010 17:30:53.318640   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1010 17:30:53.516379   10838 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-594989" context rescaled to 1 replicas
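The rescale logged here trims the coredns Deployment (two replicas by default under kubeadm) down to the single pod a one-node cluster needs; it is roughly equivalent to running:

    kubectl -n kube-system scale deployment coredns --replicas=1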
	I1010 17:30:53.854521   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.147504363s)
	I1010 17:30:53.854573   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.143295661s)
	I1010 17:30:53.854581   10838 addons.go:479] Verifying addon ingress=true in "addons-594989"
	I1010 17:30:53.854605   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.125851669s)
	I1010 17:30:53.854664   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.121904158s)
	I1010 17:30:53.854740   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.113780873s)
	I1010 17:30:53.854909   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.112035171s)
	W1010 17:30:53.854965   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:30:53.854991   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.106279991s)
	I1010 17:30:53.854992   10838 retry.go:31] will retry after 366.515538ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1010 17:30:53.855029   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.053445056s)
	I1010 17:30:53.855125   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.057465491s)
	I1010 17:30:53.855151   10838 addons.go:479] Verifying addon registry=true in "addons-594989"
	I1010 17:30:53.855392   10838 addons.go:479] Verifying addon metrics-server=true in "addons-594989"
	I1010 17:30:53.859251   10838 out.go:179] * Verifying ingress addon...
	I1010 17:30:53.859253   10838 out.go:179] * Verifying registry addon...
	I1010 17:30:53.859258   10838 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-594989 service yakd-dashboard -n yakd-dashboard
	
	I1010 17:30:53.861031   10838 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1010 17:30:53.861042   10838 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W1010 17:30:53.861614   10838 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
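The storage-provisioner-rancher warning is an optimistic-concurrency conflict: two writers raced to update the local-path StorageClass, so the second write was rejected for carrying a stale resourceVersion and has to be re-applied against the latest object. Re-issuing the same change normally succeeds, e.g. (a hypothetical remediation, not taken from this run):

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'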
	I1010 17:30:53.863569   10838 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1010 17:30:53.863688   10838 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1010 17:30:53.863705   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
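Each kapi.go waiter above polls the pods behind a label selector until they leave Pending (the trailing [<nil>] appears to be an unset status detail). The same check by hand:

    kubectl -n kube-system get pods \
      -l kubernetes.io/minikube-addons=registry \
      -o jsonpath='{.items[*].status.phase}'
    # prints "Pending" until the image is pulled and the pod goes Running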
	I1010 17:30:54.222254   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
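This re-run adds --force after the first apply failed. The failure itself is a client-side validation error: every document in a manifest must carry both apiVersion and kind, and ig-crd.yaml evidently does not, so kubectl rejects the file before it ever reaches the API server (the other inspektor-gadget objects were created fine, which is why later attempts report them "unchanged"). With the file itself broken, --force cannot help, and the retry loop below keeps failing for the same reason. A quick way to confirm what is wrong with the manifest (assumes shell access to the node; not part of this run):

    sudo head -n 5 /etc/kubernetes/addons/ig-crd.yaml
    # a valid CRD manifest must begin with both of:
    #   apiVersion: apiextensions.k8s.io/v1
    #   kind: CustomResourceDefinition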
	I1010 17:30:54.364651   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:54.364788   10838 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1010 17:30:54.364802   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:54.369727   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.326734683s)
	W1010 17:30:54.369775   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1010 17:30:54.369803   10838 retry.go:31] will retry after 352.099198ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	I1010 17:30:54.369981   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.051290791s)
	I1010 17:30:54.370014   10838 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-594989"
	I1010 17:30:54.372517   10838 out.go:179] * Verifying csi-hostpath-driver addon...
	I1010 17:30:54.374478   10838 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1010 17:30:54.377671   10838 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1010 17:30:54.377691   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:30:54.722820   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
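The VolumeSnapshotClass failure above is a CRD registration race, not a broken manifest: the snapshot CRDs were created in the same apply batch, but API discovery had not caught up by the time csi-hostpath-snapshotclass.yaml was processed, hence "ensure CRDs are installed first". The forced re-apply here succeeds once the CRDs are established (see the Completed line at 17:30:57.194283). A sketch of how the two steps could be serialized by hand, which is not what minikube itself runs:

    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml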
	W1010 17:30:54.776473   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:30:54.776507   10838 retry.go:31] will retry after 528.971148ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1010 17:30:54.864299   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:54.864437   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:54.877162   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:30:55.004421   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
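The node_ready warnings repeated through this stretch come from polling the node's Ready condition (against the 6m0s budget noted at 17:30:53.001430); the node stays NotReady until the CNI is up. The equivalent one-liner:

    kubectl get node addons-594989 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints "False" while the CNI is still coming up, "True" once kubelet reports Ready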
	I1010 17:30:55.306241   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:30:55.364747   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:55.364875   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:55.377009   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:30:55.863333   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:55.863500   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:55.878080   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:30:56.364459   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:56.364610   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:56.376474   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:30:56.864231   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:56.864421   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:56.877031   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:30:57.194283   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.47140917s)
	I1010 17:30:57.194342   10838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.888063958s)
	W1010 17:30:57.194383   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:30:57.194407   10838 retry.go:31] will retry after 693.738793ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1010 17:30:57.365132   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:57.365259   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:57.377209   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:30:57.504144   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:30:57.863593   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:57.863703   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:57.876885   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:30:57.889011   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:30:58.363344   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:58.363403   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:58.377006   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:30:58.413710   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:30:58.413738   10838 retry.go:31] will retry after 794.474194ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1010 17:30:58.864129   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:58.864316   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:58.876872   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:30:59.208781   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:30:59.364779   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:59.364863   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:59.377199   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:30:59.721887   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:30:59.721916   10838 retry.go:31] will retry after 807.314696ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1010 17:30:59.863417   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:30:59.863563   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:30:59.877031   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:00.004491   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:00.063367   10838 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1010 17:31:00.063439   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:31:00.080754   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:31:00.183900   10838 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1010 17:31:00.197922   10838 addons.go:238] Setting addon gcp-auth=true in "addons-594989"
	I1010 17:31:00.197987   10838 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:31:00.198363   10838 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:31:00.216450   10838 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1010 17:31:00.216498   10838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:31:00.233656   10838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:31:00.327665   10838 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1010 17:31:00.328901   10838 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1010 17:31:00.330157   10838 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1010 17:31:00.330174   10838 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1010 17:31:00.344192   10838 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1010 17:31:00.344211   10838 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1010 17:31:00.357424   10838 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1010 17:31:00.357441   10838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1010 17:31:00.364329   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:00.364485   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:00.371389   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1010 17:31:00.377577   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:00.530359   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:31:00.674223   10838 addons.go:479] Verifying addon gcp-auth=true in "addons-594989"
	I1010 17:31:00.675279   10838 out.go:179] * Verifying gcp-auth addon...
	I1010 17:31:00.676750   10838 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1010 17:31:00.678935   10838 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1010 17:31:00.678955   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:00.863473   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:00.863651   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:00.876438   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:01.068183   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:01.068211   10838 retry.go:31] will retry after 1.293717217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1010 17:31:01.179359   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:01.364335   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:01.364562   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:01.377288   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:01.679713   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:01.864346   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:01.864499   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:01.877935   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:02.180046   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:02.362320   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:31:02.364846   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:02.365041   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:02.377756   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:02.504445   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:02.680630   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:02.864352   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:02.864530   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:02.878156   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:02.900507   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:02.900533   10838 retry.go:31] will retry after 3.904162671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1010 17:31:03.179581   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:03.364541   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:03.364678   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:03.377160   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:03.679314   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:03.863901   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:03.864091   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:03.877469   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:04.179425   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:04.364144   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:04.364344   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:04.377925   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:04.504515   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:04.680233   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:04.863775   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:04.863868   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:04.877031   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:05.180225   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:05.364252   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:05.364492   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:05.377760   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:05.680422   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:05.863745   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:05.863909   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:05.877114   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:06.180145   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:06.364636   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:06.364839   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:06.377005   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:06.680119   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:06.805297   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:31:06.864354   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:06.864577   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:06.879790   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:07.004311   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:07.179715   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1010 17:31:07.340451   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:07.340477   10838 retry.go:31] will retry after 2.748276662s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1010 17:31:07.364809   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:07.364990   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:07.377279   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:07.679577   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:07.864200   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:07.864412   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:07.877784   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:08.179925   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:08.364302   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:08.364505   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:08.377655   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:08.679774   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:08.864248   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:08.864459   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:08.877616   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:09.179736   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:09.364848   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:09.365019   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:09.377075   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:09.504644   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:09.680432   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:09.864154   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:09.864347   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:09.877441   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:10.089264   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:31:10.180336   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:10.364130   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:10.364342   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:10.378175   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:10.622562   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:10.622592   10838 retry.go:31] will retry after 5.588682028s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:10.679857   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:10.864483   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:10.864537   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:10.877468   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:11.179006   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:11.363939   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:11.363981   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:11.376980   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:11.679922   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:11.864635   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:11.864800   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:11.876618   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:12.004207   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:12.179812   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:12.364448   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:12.364632   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:12.377644   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:12.679946   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:12.864469   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:12.864587   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:12.876766   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:13.179361   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:13.364251   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:13.364314   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:13.377222   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:13.679885   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:13.864402   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:13.864582   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:13.877520   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:14.179343   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:14.363832   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:14.363885   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:14.376920   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:14.504331   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:14.679984   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:14.864314   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:14.864468   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:14.877867   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:15.179147   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:15.363905   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:15.363918   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:15.376809   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:15.679765   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:15.864383   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:15.864579   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:15.877533   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:16.179034   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:16.212153   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:31:16.364227   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:16.364321   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:16.377351   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:16.678997   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1010 17:31:16.728181   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:16.728208   10838 retry.go:31] will retry after 5.366319964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:16.863814   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:16.863959   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:16.877024   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:17.004460   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:17.180133   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:17.363928   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:17.364140   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:17.377379   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:17.679231   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:17.863876   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:17.863896   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:17.876938   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:18.179713   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:18.364206   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:18.364352   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:18.377182   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:18.680290   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:18.863938   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:18.863976   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:18.876988   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:19.179661   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:19.364174   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:19.364251   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:19.376985   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:19.504381   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:19.679789   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:19.864421   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:19.864544   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:19.877825   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:20.179540   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:20.364429   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:20.364524   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:20.377245   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:20.680091   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:20.863393   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:20.863574   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:20.877577   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:21.182300   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:21.363825   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:21.364068   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:21.376659   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:21.679439   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:21.864166   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:21.864214   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:21.877451   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:22.004659   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:22.094857   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:31:22.179810   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:22.364615   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:22.364733   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:22.377362   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:22.612540   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:22.612569   10838 retry.go:31] will retry after 9.056196227s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:22.679891   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:22.864503   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:22.864551   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:22.877479   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:23.178873   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:23.364646   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:23.364734   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:23.376458   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:23.679275   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:23.863820   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:23.864006   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:23.876728   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:24.179095   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:24.364688   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:24.364713   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:24.376673   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:24.504019   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:24.679303   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:24.863840   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:24.863854   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:24.876734   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:25.179581   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:25.364229   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:25.364263   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:25.376812   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:25.680350   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:25.863789   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:25.863901   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:25.876739   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:26.179216   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:26.363743   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:26.363965   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:26.376610   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:26.679446   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:26.863896   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:26.864010   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:26.877259   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:27.004527   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:27.180333   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:27.364129   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:27.364267   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:27.377594   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:27.679661   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:27.864364   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:27.864546   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:27.878093   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:28.179780   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:28.364548   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:28.364645   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:28.376759   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:28.679858   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:28.864225   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:28.864368   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:28.877424   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:29.179266   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:29.363683   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:29.363848   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:29.376955   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:29.504351   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:29.679659   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:29.864172   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:29.864383   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:29.877602   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:30.179486   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:30.364112   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:30.364320   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:30.377312   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:30.680141   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:30.863748   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:30.863826   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:30.876711   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:31.179279   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:31.363919   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:31.364104   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:31.377309   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:31:31.504865   10838 node_ready.go:57] node "addons-594989" has "Ready":"False" status (will retry)
	I1010 17:31:31.669043   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:31:31.679802   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:31.864921   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:31.865281   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:31.877318   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:32.179514   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1010 17:31:32.197595   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:32.197627   10838 retry.go:31] will retry after 31.007929001s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:31:32.364326   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:32.364523   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:32.377639   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:32.679748   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:32.864382   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:32.864526   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:32.877621   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:33.179576   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:33.364184   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:33.364416   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:33.377197   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:33.680337   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:33.864864   10838 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1010 17:31:33.864889   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:33.867102   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:33.879663   10838 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1010 17:31:33.879687   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
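
Note: the half-second cadence of the "waiting for pod ..." lines is a fixed-interval poll: kapi.go lists pods by label selector, prints "Pending: [<nil>]" while nothing matches yet, and switches to per-pod phases once it logs "Found N Pods". A sketch of that loop's shape (an assumed structure, not kapi.go itself):

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // pollUntil re-evaluates cond on a fixed interval until it reports done,
    // returns an error, or the context expires.
    func pollUntil(ctx context.Context, interval time.Duration, cond func() (bool, error)) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            done, err := cond()
            if err != nil {
                return err
            }
            if done {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()
        found := 0
        err := pollUntil(ctx, 500*time.Millisecond, func() (bool, error) {
            found++ // stand-in for listing pods by label selector
            fmt.Println(`waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending`)
            return found >= 3, nil
        })
        fmt.Println("result:", err)
    }
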
	I1010 17:31:34.004743   10838 node_ready.go:49] node "addons-594989" is "Ready"
	I1010 17:31:34.004776   10838 node_ready.go:38] duration metric: took 41.003324473s for node "addons-594989" to be "Ready" ...
	I1010 17:31:34.004795   10838 api_server.go:52] waiting for apiserver process to appear ...
	I1010 17:31:34.004993   10838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 17:31:34.021879   10838 api_server.go:72] duration metric: took 41.68119327s to wait for apiserver process to appear ...
	I1010 17:31:34.021906   10838 api_server.go:88] waiting for apiserver healthz status ...
	I1010 17:31:34.021928   10838 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1010 17:31:34.026649   10838 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1010 17:31:34.027444   10838 api_server.go:141] control plane version: v1.34.1
	I1010 17:31:34.027477   10838 api_server.go:131] duration metric: took 5.564334ms to wait for apiserver health ...
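
Note: the health probe is a plain GET against the apiserver's /healthz endpoint, which answers 200 with the body "ok" once the control plane is up; kubeadm-style clusters permit unauthenticated access to this path. A minimal sketch (TLS verification is skipped here for brevity; minikube itself authenticates with the certificates referenced by /var/lib/minikube/kubeconfig):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }
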
	I1010 17:31:34.027488   10838 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 17:31:34.032705   10838 system_pods.go:59] 20 kube-system pods found
	I1010 17:31:34.032743   10838 system_pods.go:61] "amd-gpu-device-plugin-b5h8w" [fb6102b5-8464-4645-901a-f2f471fa6e63] Pending
	I1010 17:31:34.032760   10838 system_pods.go:61] "coredns-66bc5c9577-lpc4f" [b200196b-e5ba-474d-8cb8-3d2efaa0a804] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 17:31:34.032776   10838 system_pods.go:61] "csi-hostpath-attacher-0" [9b70703d-450d-4eb0-9ac8-149987429c8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1010 17:31:34.032791   10838 system_pods.go:61] "csi-hostpath-resizer-0" [4664e14f-70d0-44f8-a940-d484058aa2a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1010 17:31:34.032798   10838 system_pods.go:61] "csi-hostpathplugin-4g74f" [51c6b210-2fd8-4bf1-baa9-462eeb58c4ba] Pending
	I1010 17:31:34.032804   10838 system_pods.go:61] "etcd-addons-594989" [22899be9-2220-4a90-b3ac-dab0d5de26f6] Running
	I1010 17:31:34.032810   10838 system_pods.go:61] "kindnet-rbr7w" [eacdbf14-84fb-49cd-99f2-adc9b3e7914c] Running
	I1010 17:31:34.032815   10838 system_pods.go:61] "kube-apiserver-addons-594989" [2acd25d9-bff0-4093-bf22-15edb85febf2] Running
	I1010 17:31:34.032820   10838 system_pods.go:61] "kube-controller-manager-addons-594989" [e559c848-b52a-4078-a939-1ad3726dbef3] Running
	I1010 17:31:34.032832   10838 system_pods.go:61] "kube-ingress-dns-minikube" [99a30e52-981b-4bce-87c2-4aec7ec2120c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1010 17:31:34.032837   10838 system_pods.go:61] "kube-proxy-2st6b" [f1745076-7557-4cd8-9a96-b547386351a7] Running
	I1010 17:31:34.032850   10838 system_pods.go:61] "kube-scheduler-addons-594989" [dcda625c-1432-41ad-8a3b-733a797a7061] Running
	I1010 17:31:34.032861   10838 system_pods.go:61] "metrics-server-85b7d694d7-wccx5" [5b88af97-0885-40b1-acb2-8d58361a5fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 17:31:34.032874   10838 system_pods.go:61] "nvidia-device-plugin-daemonset-dlkfx" [a5f0e8a4-7957-414f-887d-b3cddab72a1e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1010 17:31:34.032886   10838 system_pods.go:61] "registry-66898fdd98-6gl8m" [7340a6ae-2ed3-4269-8f05-53911db0a12c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1010 17:31:34.032896   10838 system_pods.go:61] "registry-creds-764b6fb674-5k497" [b1404742-2d86-4ac9-91f6-3d70ff795aa1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1010 17:31:34.032906   10838 system_pods.go:61] "registry-proxy-8mr65" [9b638779-096b-4de7-a496-dfbca677f32f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1010 17:31:34.032917   10838 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jt7fl" [fc6a34c5-3334-430b-9788-4218787bf9af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.032927   10838 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ktmdr" [06c7547b-8596-460f-90bd-a79685887c74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.032936   10838 system_pods.go:61] "storage-provisioner" [57838ac1-fa29-48b5-80ef-ff137e742296] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 17:31:34.032943   10838 system_pods.go:74] duration metric: took 5.448522ms to wait for pod list to return data ...
	I1010 17:31:34.032955   10838 default_sa.go:34] waiting for default service account to be created ...
	I1010 17:31:34.034806   10838 default_sa.go:45] found service account: "default"
	I1010 17:31:34.034821   10838 default_sa.go:55] duration metric: took 1.859126ms for default service account to be created ...
	I1010 17:31:34.034828   10838 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 17:31:34.039710   10838 system_pods.go:86] 20 kube-system pods found
	I1010 17:31:34.039738   10838 system_pods.go:89] "amd-gpu-device-plugin-b5h8w" [fb6102b5-8464-4645-901a-f2f471fa6e63] Pending
	I1010 17:31:34.039755   10838 system_pods.go:89] "coredns-66bc5c9577-lpc4f" [b200196b-e5ba-474d-8cb8-3d2efaa0a804] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 17:31:34.039764   10838 system_pods.go:89] "csi-hostpath-attacher-0" [9b70703d-450d-4eb0-9ac8-149987429c8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1010 17:31:34.039774   10838 system_pods.go:89] "csi-hostpath-resizer-0" [4664e14f-70d0-44f8-a940-d484058aa2a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1010 17:31:34.039797   10838 system_pods.go:89] "csi-hostpathplugin-4g74f" [51c6b210-2fd8-4bf1-baa9-462eeb58c4ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1010 17:31:34.039803   10838 system_pods.go:89] "etcd-addons-594989" [22899be9-2220-4a90-b3ac-dab0d5de26f6] Running
	I1010 17:31:34.039810   10838 system_pods.go:89] "kindnet-rbr7w" [eacdbf14-84fb-49cd-99f2-adc9b3e7914c] Running
	I1010 17:31:34.039820   10838 system_pods.go:89] "kube-apiserver-addons-594989" [2acd25d9-bff0-4093-bf22-15edb85febf2] Running
	I1010 17:31:34.039825   10838 system_pods.go:89] "kube-controller-manager-addons-594989" [e559c848-b52a-4078-a939-1ad3726dbef3] Running
	I1010 17:31:34.039833   10838 system_pods.go:89] "kube-ingress-dns-minikube" [99a30e52-981b-4bce-87c2-4aec7ec2120c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1010 17:31:34.039838   10838 system_pods.go:89] "kube-proxy-2st6b" [f1745076-7557-4cd8-9a96-b547386351a7] Running
	I1010 17:31:34.039848   10838 system_pods.go:89] "kube-scheduler-addons-594989" [dcda625c-1432-41ad-8a3b-733a797a7061] Running
	I1010 17:31:34.039855   10838 system_pods.go:89] "metrics-server-85b7d694d7-wccx5" [5b88af97-0885-40b1-acb2-8d58361a5fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 17:31:34.039864   10838 system_pods.go:89] "nvidia-device-plugin-daemonset-dlkfx" [a5f0e8a4-7957-414f-887d-b3cddab72a1e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1010 17:31:34.039877   10838 system_pods.go:89] "registry-66898fdd98-6gl8m" [7340a6ae-2ed3-4269-8f05-53911db0a12c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1010 17:31:34.039885   10838 system_pods.go:89] "registry-creds-764b6fb674-5k497" [b1404742-2d86-4ac9-91f6-3d70ff795aa1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1010 17:31:34.039897   10838 system_pods.go:89] "registry-proxy-8mr65" [9b638779-096b-4de7-a496-dfbca677f32f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1010 17:31:34.039923   10838 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jt7fl" [fc6a34c5-3334-430b-9788-4218787bf9af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.039981   10838 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ktmdr" [06c7547b-8596-460f-90bd-a79685887c74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.040007   10838 system_pods.go:89] "storage-provisioner" [57838ac1-fa29-48b5-80ef-ff137e742296] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 17:31:34.040024   10838 retry.go:31] will retry after 278.311892ms: missing components: kube-dns
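
Note: the per-pod strings above come straight out of each pod's status: the first token is the phase (Pending/Running), and every following "Type:Reason (message)" token is a readiness condition that is not yet True. A sketch of how such a line can be rendered from the API types (an assumed helper, not minikube's actual formatter; uses k8s.io/api/core/v1):

    package main

    import (
        "fmt"
        "strings"

        corev1 "k8s.io/api/core/v1"
    )

    // podState renders "Phase / Type:Reason (message)" for each readiness
    // condition that is not yet True, matching the log format above.
    func podState(pod *corev1.Pod) string {
        parts := []string{string(pod.Status.Phase)}
        for _, c := range pod.Status.Conditions {
            if c.Status == corev1.ConditionTrue {
                continue
            }
            if c.Type == corev1.PodReady || c.Type == corev1.ContainersReady {
                parts = append(parts, fmt.Sprintf("%s:%s (%s)", c.Type, c.Reason, c.Message))
            }
        }
        return strings.Join(parts, " / ")
    }

    func main() {
        pod := &corev1.Pod{Status: corev1.PodStatus{
            Phase: corev1.PodPending,
            Conditions: []corev1.PodCondition{{
                Type:    corev1.PodReady,
                Status:  corev1.ConditionFalse,
                Reason:  "ContainersNotReady",
                Message: "containers with unready status: [coredns]",
            }},
        }}
        fmt.Println(podState(pod)) // Pending / Ready:ContainersNotReady (...)
    }
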
	I1010 17:31:34.179698   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:34.324104   10838 system_pods.go:86] 20 kube-system pods found
	I1010 17:31:34.324146   10838 system_pods.go:89] "amd-gpu-device-plugin-b5h8w" [fb6102b5-8464-4645-901a-f2f471fa6e63] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1010 17:31:34.324157   10838 system_pods.go:89] "coredns-66bc5c9577-lpc4f" [b200196b-e5ba-474d-8cb8-3d2efaa0a804] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 17:31:34.324168   10838 system_pods.go:89] "csi-hostpath-attacher-0" [9b70703d-450d-4eb0-9ac8-149987429c8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1010 17:31:34.324176   10838 system_pods.go:89] "csi-hostpath-resizer-0" [4664e14f-70d0-44f8-a940-d484058aa2a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1010 17:31:34.324189   10838 system_pods.go:89] "csi-hostpathplugin-4g74f" [51c6b210-2fd8-4bf1-baa9-462eeb58c4ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1010 17:31:34.324196   10838 system_pods.go:89] "etcd-addons-594989" [22899be9-2220-4a90-b3ac-dab0d5de26f6] Running
	I1010 17:31:34.324212   10838 system_pods.go:89] "kindnet-rbr7w" [eacdbf14-84fb-49cd-99f2-adc9b3e7914c] Running
	I1010 17:31:34.324227   10838 system_pods.go:89] "kube-apiserver-addons-594989" [2acd25d9-bff0-4093-bf22-15edb85febf2] Running
	I1010 17:31:34.324237   10838 system_pods.go:89] "kube-controller-manager-addons-594989" [e559c848-b52a-4078-a939-1ad3726dbef3] Running
	I1010 17:31:34.324250   10838 system_pods.go:89] "kube-ingress-dns-minikube" [99a30e52-981b-4bce-87c2-4aec7ec2120c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1010 17:31:34.324258   10838 system_pods.go:89] "kube-proxy-2st6b" [f1745076-7557-4cd8-9a96-b547386351a7] Running
	I1010 17:31:34.324265   10838 system_pods.go:89] "kube-scheduler-addons-594989" [dcda625c-1432-41ad-8a3b-733a797a7061] Running
	I1010 17:31:34.324276   10838 system_pods.go:89] "metrics-server-85b7d694d7-wccx5" [5b88af97-0885-40b1-acb2-8d58361a5fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 17:31:34.324296   10838 system_pods.go:89] "nvidia-device-plugin-daemonset-dlkfx" [a5f0e8a4-7957-414f-887d-b3cddab72a1e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1010 17:31:34.324305   10838 system_pods.go:89] "registry-66898fdd98-6gl8m" [7340a6ae-2ed3-4269-8f05-53911db0a12c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1010 17:31:34.324313   10838 system_pods.go:89] "registry-creds-764b6fb674-5k497" [b1404742-2d86-4ac9-91f6-3d70ff795aa1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1010 17:31:34.324321   10838 system_pods.go:89] "registry-proxy-8mr65" [9b638779-096b-4de7-a496-dfbca677f32f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1010 17:31:34.324328   10838 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jt7fl" [fc6a34c5-3334-430b-9788-4218787bf9af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.324336   10838 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ktmdr" [06c7547b-8596-460f-90bd-a79685887c74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.324347   10838 system_pods.go:89] "storage-provisioner" [57838ac1-fa29-48b5-80ef-ff137e742296] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 17:31:34.324364   10838 retry.go:31] will retry after 344.769509ms: missing components: kube-dns
	I1010 17:31:34.423538   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:34.423660   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:34.423824   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:34.673857   10838 system_pods.go:86] 20 kube-system pods found
	I1010 17:31:34.673896   10838 system_pods.go:89] "amd-gpu-device-plugin-b5h8w" [fb6102b5-8464-4645-901a-f2f471fa6e63] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1010 17:31:34.673907   10838 system_pods.go:89] "coredns-66bc5c9577-lpc4f" [b200196b-e5ba-474d-8cb8-3d2efaa0a804] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 17:31:34.673918   10838 system_pods.go:89] "csi-hostpath-attacher-0" [9b70703d-450d-4eb0-9ac8-149987429c8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1010 17:31:34.673927   10838 system_pods.go:89] "csi-hostpath-resizer-0" [4664e14f-70d0-44f8-a940-d484058aa2a7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1010 17:31:34.673938   10838 system_pods.go:89] "csi-hostpathplugin-4g74f" [51c6b210-2fd8-4bf1-baa9-462eeb58c4ba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1010 17:31:34.673944   10838 system_pods.go:89] "etcd-addons-594989" [22899be9-2220-4a90-b3ac-dab0d5de26f6] Running
	I1010 17:31:34.673950   10838 system_pods.go:89] "kindnet-rbr7w" [eacdbf14-84fb-49cd-99f2-adc9b3e7914c] Running
	I1010 17:31:34.673956   10838 system_pods.go:89] "kube-apiserver-addons-594989" [2acd25d9-bff0-4093-bf22-15edb85febf2] Running
	I1010 17:31:34.673962   10838 system_pods.go:89] "kube-controller-manager-addons-594989" [e559c848-b52a-4078-a939-1ad3726dbef3] Running
	I1010 17:31:34.673977   10838 system_pods.go:89] "kube-ingress-dns-minikube" [99a30e52-981b-4bce-87c2-4aec7ec2120c] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1010 17:31:34.673984   10838 system_pods.go:89] "kube-proxy-2st6b" [f1745076-7557-4cd8-9a96-b547386351a7] Running
	I1010 17:31:34.673989   10838 system_pods.go:89] "kube-scheduler-addons-594989" [dcda625c-1432-41ad-8a3b-733a797a7061] Running
	I1010 17:31:34.673997   10838 system_pods.go:89] "metrics-server-85b7d694d7-wccx5" [5b88af97-0885-40b1-acb2-8d58361a5fd0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1010 17:31:34.674007   10838 system_pods.go:89] "nvidia-device-plugin-daemonset-dlkfx" [a5f0e8a4-7957-414f-887d-b3cddab72a1e] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1010 17:31:34.674019   10838 system_pods.go:89] "registry-66898fdd98-6gl8m" [7340a6ae-2ed3-4269-8f05-53911db0a12c] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1010 17:31:34.674030   10838 system_pods.go:89] "registry-creds-764b6fb674-5k497" [b1404742-2d86-4ac9-91f6-3d70ff795aa1] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1010 17:31:34.674038   10838 system_pods.go:89] "registry-proxy-8mr65" [9b638779-096b-4de7-a496-dfbca677f32f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1010 17:31:34.674060   10838 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jt7fl" [fc6a34c5-3334-430b-9788-4218787bf9af] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.674075   10838 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ktmdr" [06c7547b-8596-460f-90bd-a79685887c74] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1010 17:31:34.674083   10838 system_pods.go:89] "storage-provisioner" [57838ac1-fa29-48b5-80ef-ff137e742296] Running
	I1010 17:31:34.674093   10838 system_pods.go:126] duration metric: took 639.259847ms to wait for k8s-apps to be running ...
	I1010 17:31:34.674107   10838 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 17:31:34.674172   10838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 17:31:34.679588   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:34.687898   10838 system_svc.go:56] duration metric: took 13.786119ms WaitForService to wait for kubelet
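A note on the WaitForService check above: "systemctl is-active --quiet" prints nothing and signals state purely through its exit code, so the runner only needs to test whether the command succeeded. A minimal standalone sketch in Go (the sudo/systemctl invocation is copied from the Run line above; everything else is illustrative, not minikube's system_svc.go):

	// Sketch of the kubelet liveness probe logged above. "is-active --quiet"
	// exits 0 iff the unit is active, so a nil error means kubelet is running.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
		fmt.Println("kubelet running:", err == nil)
	}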
	I1010 17:31:34.687924   10838 kubeadm.go:586] duration metric: took 42.347251677s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 17:31:34.687948   10838 node_conditions.go:102] verifying NodePressure condition ...
	I1010 17:31:34.690333   10838 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 17:31:34.690354   10838 node_conditions.go:123] node cpu capacity is 8
	I1010 17:31:34.690368   10838 node_conditions.go:105] duration metric: took 2.414779ms to run NodePressure ...
	I1010 17:31:34.690380   10838 start.go:241] waiting for startup goroutines ...
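The kapi.go:96 lines that dominate the rest of this log are minikube's addon-readiness poll: each enabled addon (registry, ingress-nginx, csi-hostpath-driver, gcp-auth) is re-checked by label selector roughly twice per second until its pods leave Pending or the wait times out. A minimal sketch of such a poll with client-go, assuming a reachable cluster via the default kubeconfig; the namespace, interval, and timeout are illustrative values, not minikube's actual policy:

	// Sketch of a label-selector readiness poll like the kapi.go:96 lines.
	// Illustrative only: minikube's real wait logic lives in kapi.go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podsRunning reports whether at least one pod matched and all are Running.
	func podsRunning(pods []corev1.Pod) bool {
		if len(pods) == 0 {
			return false // nothing scheduled yet, still report Pending
		}
		for _, p := range pods {
			if p.Status.Phase != corev1.PodRunning {
				return false
			}
		}
		return true
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		selector := "kubernetes.io/minikube-addons=registry" // one of the four selectors polled below
		deadline := time.Now().Add(6 * time.Minute)          // illustrative timeout
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && podsRunning(pods.Items) {
				fmt.Println("pods for", selector, "are Running")
				return
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			time.Sleep(500 * time.Millisecond) // the log shows roughly two polls per second
		}
		fmt.Println("timed out waiting for", selector)
	}

The registry selector, for example, resolves below at 17:32:23.864 after 1m30s of these polls.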
	I1010 17:31:34.864116   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:34.864242   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:34.877216   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:35.179637   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:35.364385   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:35.364526   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:35.465527   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:35.681028   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:35.865882   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:35.868880   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:35.878730   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:36.179746   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:36.364861   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:36.364974   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:36.377623   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:36.679628   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:36.864754   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:36.864930   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:36.877371   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:37.180121   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:37.365208   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:37.365382   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:37.378524   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:37.680273   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:37.864233   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:37.864273   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:37.878036   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:38.179458   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:38.365164   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:38.365482   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:38.378103   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:38.680130   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:38.865066   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:38.865109   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:38.878453   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:39.179729   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:39.364250   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:39.364404   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:39.380552   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:39.679134   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:39.864210   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:39.864252   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:39.877268   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:40.179768   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:40.364361   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:40.364425   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:40.377762   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:40.679336   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:40.863999   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:40.864205   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:40.877262   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:41.179800   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:41.364948   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:41.365177   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:41.377960   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:41.680087   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:41.864988   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:41.865136   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:41.876987   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:42.179503   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:42.364635   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:42.364787   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:42.377251   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:42.680826   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:42.864858   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:42.864985   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:42.877589   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:43.180329   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:43.363738   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:43.363911   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:43.376676   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:43.680406   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:43.864304   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:43.864386   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:43.877640   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:44.178901   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:44.364806   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:44.364859   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:44.376949   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:44.679774   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:44.864540   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:44.864571   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:44.878350   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:45.179506   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:45.364248   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:45.364247   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:45.377236   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:45.679944   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:45.864880   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:45.864949   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:45.877479   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:46.179802   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:46.364604   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:46.364650   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:46.376813   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:46.679948   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:46.865093   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:46.865175   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:46.878007   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:47.179502   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:47.364574   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:47.364638   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:47.377742   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:47.680459   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:47.864766   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:47.864803   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:47.876992   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:48.179541   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:48.364500   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:48.364689   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:48.377233   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:48.680843   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:48.864768   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:48.864819   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:48.876779   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:49.179354   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:49.364092   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:49.364122   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:49.377418   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:49.679511   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:49.864429   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:49.864604   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:49.877165   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:50.179814   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:50.365292   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:50.365359   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:50.378605   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:50.680140   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:50.864227   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:50.864293   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:50.877296   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:51.179640   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:51.364553   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:51.364863   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:51.377237   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:51.680257   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:51.864086   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:51.864132   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:51.877599   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:52.180164   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:52.363985   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:52.364092   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:52.377104   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:52.680303   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:52.864028   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:52.864034   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:52.877168   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:53.179466   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:53.363885   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:53.363983   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:53.376654   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:53.679251   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:53.863648   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:53.863680   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:53.876685   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:54.179765   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:54.364355   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:54.364369   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:54.377446   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:54.680849   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:54.866225   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:54.866382   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:54.879087   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:55.182909   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:55.365646   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:55.365824   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:55.378432   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:55.680463   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:55.865453   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:55.865548   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:55.878683   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:56.180230   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:56.364681   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:56.364765   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:56.378999   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:56.680675   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:56.864790   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:56.864958   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:56.877911   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:57.180564   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:57.364713   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:57.364759   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:57.377887   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:57.679725   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:57.936971   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:57.937204   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:57.937284   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:58.179856   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:58.364919   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:58.364949   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:58.378077   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:58.680279   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:58.864141   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:58.864317   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:58.877950   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:59.179955   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:59.364898   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:59.365365   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:59.377806   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:31:59.679339   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:31:59.864064   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:31:59.864255   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:31:59.877339   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:00.179971   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:00.365527   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:00.365556   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:00.377291   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:00.680349   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:00.864229   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:00.864259   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:00.878034   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:01.179563   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:01.364695   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:01.364734   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:01.377140   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:01.680110   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:01.863737   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:01.863859   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:01.877267   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:02.180215   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:02.363991   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:02.364078   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:02.377857   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:02.680117   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:02.865211   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:02.865419   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:02.878371   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:03.180172   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:03.206293   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:32:03.365313   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:03.365529   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:03.378024   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:03.679851   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1010 17:32:03.801261   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1010 17:32:03.801292   10838 retry.go:31] will retry after 20.604988317s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
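The stderr above pinpoints the inspektor-gadget failure: kubectl validates client-side that every manifest document declares apiVersion and kind, and ig-crd.yaml in this run evidently carried neither, while the other documents in the same apply went through ("unchanged"/"configured"). A rough sketch of that top-level check; the struct and error text merely mimic kubectl's message and are not its real implementation (single-document YAML only, for illustration):

	// Rough sketch of the top-level validation that failed above: a manifest
	// document must set apiVersion and kind. Not kubectl's actual code.
	package main

	import (
		"fmt"
		"os"

		"gopkg.in/yaml.v3"
	)

	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		data, err := os.ReadFile(os.Args[1]) // e.g. ig-crd.yaml
		if err != nil {
			panic(err)
		}
		var tm typeMeta
		if err := yaml.Unmarshal(data, &tm); err != nil {
			panic(err)
		}
		var missing []string
		if tm.APIVersion == "" {
			missing = append(missing, "apiVersion not set")
		}
		if tm.Kind == "" {
			missing = append(missing, "kind not set")
		}
		if len(missing) > 0 {
			fmt.Printf("error validating %q: error validating data: %v\n", os.Args[1], missing)
			os.Exit(1)
		}
		fmt.Println("ok")
	}

Note that the --validate=false escape hatch suggested in the stderr would only silence this client-side check; kubectl would most likely still fail to map a document without apiVersion/kind to any API resource, which is consistent with the identical failure on the 20.6s retry below.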
	I1010 17:32:03.864307   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:03.864336   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:03.878514   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:04.180166   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:04.365025   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:04.365088   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:04.378292   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:04.679966   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:04.864643   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:04.864812   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:04.876903   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:05.179258   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:05.364537   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:05.364562   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:05.378104   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:05.680431   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:05.864456   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:05.864550   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:05.877625   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:06.180676   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:06.365244   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:06.365336   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:06.378454   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:06.680494   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:06.864775   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:06.864843   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:06.876952   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:07.179643   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:07.364558   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:07.364621   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:07.377890   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:07.679702   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:07.864547   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:07.864711   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:07.877705   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:08.180363   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:08.363958   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:08.364087   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:08.377419   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:08.679389   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:08.864242   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:08.864346   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:08.877601   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:09.180395   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:09.364128   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:09.364163   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:09.377424   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:09.680393   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:09.864046   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:09.864114   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:09.877166   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:10.179483   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:10.364481   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:10.364503   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:10.377580   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:10.680365   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:10.864077   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:10.864187   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:10.877210   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:11.179368   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:11.364360   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:11.364399   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:11.377797   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:11.679323   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:11.864166   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:11.864223   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:11.877515   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:12.180081   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:12.363636   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:12.363780   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:12.378089   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:12.680598   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:12.864322   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:12.864364   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:12.877776   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:13.179176   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:13.363749   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:13.363766   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:13.376880   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:13.679372   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:13.864089   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:13.864134   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:13.877143   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:14.179755   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:14.364637   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:14.364669   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:14.377657   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:14.679662   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:14.864192   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:14.864345   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:14.877692   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:15.180438   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:15.363854   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:15.363893   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:15.376872   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:15.679528   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:15.864369   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:15.864400   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:15.877869   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:16.179029   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:16.363646   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:16.363826   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:16.376990   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:16.679717   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:16.864610   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:16.864610   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:16.877769   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:17.178859   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:17.364531   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:17.364656   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:17.376872   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:17.679875   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:17.865068   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:17.865122   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:17.877509   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:18.180281   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:18.363786   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:18.363869   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:18.376937   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:18.679474   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:18.864319   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:18.864319   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:18.877328   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:19.180320   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:19.364572   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:19.364583   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:19.377730   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:19.679446   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:19.864605   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:19.864610   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:19.876533   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:20.179755   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:20.364683   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:20.364693   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:20.377873   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:20.680045   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:20.863572   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:20.863745   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:20.877009   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:21.179348   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:21.364392   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:21.364517   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:21.378236   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:21.680338   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:21.864167   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:21.864299   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:21.878079   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:22.180150   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:22.363779   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:22.363833   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:22.377452   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:22.680886   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:22.865108   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:22.865133   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:22.877864   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:23.179520   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:23.364292   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1010 17:32:23.364408   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:23.377663   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:23.680802   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:23.864919   10838 kapi.go:107] duration metric: took 1m30.003883582s to wait for kubernetes.io/minikube-addons=registry ...
	I1010 17:32:23.864953   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:23.878082   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:24.180238   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:24.365281   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:24.378180   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:24.407179   10838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1010 17:32:24.680859   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:24.865396   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:24.878889   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1010 17:32:25.139804   10838 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1010 17:32:25.139918   10838 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
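	The retry above fails kubectl's client-side validation: at least one document in ig-crd.yaml is missing its required apiVersion and kind fields. A sketch of confirming this by hand, with the profile name and paths taken from the log (a client-side dry run reproduces the error without touching cluster state, which the suggested --validate=false would instead paper over):
	
	# Inspect the manifest that failed validation (path from the log above).
	minikube -p addons-594989 ssh -- sudo head -n 20 /etc/kubernetes/addons/ig-crd.yaml
	# A client-side dry run reproduces the same "apiVersion not set, kind not set"
	# error without modifying any cluster resources.
	minikube -p addons-594989 ssh -- sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
	  -f /etc/kubernetes/addons/ig-crd.yaml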
	I1010 17:32:25.179535   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:25.364479   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:25.378141   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:25.679118   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:25.864663   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:25.878656   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:26.179169   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:26.364762   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:26.377084   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:26.679950   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:26.864597   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:26.878152   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:27.179644   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:27.364239   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:27.377922   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:27.679650   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:27.865021   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:27.877914   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:28.180124   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1010 17:32:28.365023   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:28.377281   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:28.683835   10838 kapi.go:107] duration metric: took 1m28.007083421s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1010 17:32:28.686213   10838 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-594989 cluster.
	I1010 17:32:28.687939   10838 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1010 17:32:28.689482   10838 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
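	As a concrete illustration of the opt-out mentioned above, a pod can be created with the gcp-auth-skip-secret label; the label key comes from the log message itself, while the pod name and image below are placeholders (the webhook acts at admission, so the label must be present at creation time):
	
	# Create a pod that opts out of GCP credential mounting.
	kubectl --context addons-594989 run no-creds-demo --image=busybox \
	  --labels=gcp-auth-skip-secret=true -- sleep 3600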
	I1010 17:32:28.865129   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:28.877712   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:29.365521   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:29.379726   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:29.865019   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:29.877846   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:30.365037   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:30.377777   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:30.864926   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:30.877237   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:31.365638   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:31.378238   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:31.864453   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:31.877971   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:32.365110   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:32.378287   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:32.864596   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:32.877667   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:33.364314   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:33.377466   10838 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1010 17:32:33.864983   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:33.877197   10838 kapi.go:107] duration metric: took 1m39.502719833s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1010 17:32:34.364547   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:34.864996   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:35.364595   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:35.865061   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:36.364954   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:36.864108   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:37.364461   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:37.865039   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:38.364399   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:38.867417   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:39.364949   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:39.863887   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:40.365017   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:40.869313   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:41.365525   10838 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1010 17:32:41.865199   10838 kapi.go:107] duration metric: took 1m48.004153058s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1010 17:32:41.867540   10838 out.go:179] * Enabled addons: ingress-dns, registry-creds, nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1010 17:32:41.868505   10838 addons.go:514] duration metric: took 1m49.527783965s for enable addons: enabled=[ingress-dns registry-creds nvidia-device-plugin amd-gpu-device-plugin cloud-spanner storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1010 17:32:41.868544   10838 start.go:246] waiting for cluster config update ...
	I1010 17:32:41.868560   10838 start.go:255] writing updated cluster config ...
	I1010 17:32:41.868777   10838 ssh_runner.go:195] Run: rm -f paused
	I1010 17:32:41.872602   10838 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 17:32:41.875236   10838 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lpc4f" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:41.878956   10838 pod_ready.go:94] pod "coredns-66bc5c9577-lpc4f" is "Ready"
	I1010 17:32:41.878972   10838 pod_ready.go:86] duration metric: took 3.719318ms for pod "coredns-66bc5c9577-lpc4f" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:41.880742   10838 pod_ready.go:83] waiting for pod "etcd-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:41.883875   10838 pod_ready.go:94] pod "etcd-addons-594989" is "Ready"
	I1010 17:32:41.883896   10838 pod_ready.go:86] duration metric: took 3.137197ms for pod "etcd-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:41.885470   10838 pod_ready.go:83] waiting for pod "kube-apiserver-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:41.888398   10838 pod_ready.go:94] pod "kube-apiserver-addons-594989" is "Ready"
	I1010 17:32:41.888415   10838 pod_ready.go:86] duration metric: took 2.929841ms for pod "kube-apiserver-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:41.889899   10838 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:42.276344   10838 pod_ready.go:94] pod "kube-controller-manager-addons-594989" is "Ready"
	I1010 17:32:42.276371   10838 pod_ready.go:86] duration metric: took 386.456707ms for pod "kube-controller-manager-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:42.476275   10838 pod_ready.go:83] waiting for pod "kube-proxy-2st6b" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:42.876238   10838 pod_ready.go:94] pod "kube-proxy-2st6b" is "Ready"
	I1010 17:32:42.876265   10838 pod_ready.go:86] duration metric: took 399.963725ms for pod "kube-proxy-2st6b" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:43.076004   10838 pod_ready.go:83] waiting for pod "kube-scheduler-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:43.476342   10838 pod_ready.go:94] pod "kube-scheduler-addons-594989" is "Ready"
	I1010 17:32:43.476368   10838 pod_ready.go:86] duration metric: took 400.341354ms for pod "kube-scheduler-addons-594989" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 17:32:43.476377   10838 pod_ready.go:40] duration metric: took 1.603755753s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 17:32:43.519373   10838 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 17:32:43.521477   10838 out.go:179] * Done! kubectl is now configured to use "addons-594989" cluster and "default" namespace by default
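	The pod_ready polling in the tail of the log has a straightforward kubectl equivalent; a sketch, with the label selectors and the 4m ceiling copied from the "extra waiting up to 4m0s" line above:
	
	# Wait for the same control-plane pods the log polls, one selector at a time.
	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	    component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl --context addons-594989 -n kube-system wait pod \
	    --for=condition=Ready -l "$sel" --timeout=4m
	done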
	
	
	==> CRI-O <==
	Oct 10 17:32:44 addons-594989 crio[799]: time="2025-10-10T17:32:44.370269139Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 10 17:32:44 addons-594989 crio[799]: time="2025-10-10T17:32:44.371094819Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 10 17:32:44 addons-594989 crio[799]: time="2025-10-10T17:32:44.371849358Z" level=info msg="Ran pod sandbox 03f62b61efde38d5352b2bc659e02f12ac88387b13662e0aec8d0f849f115497 with infra container: default/busybox/POD" id=64aa5938-5847-480b-9fe4-f135bf51a8f9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 17:32:44 addons-594989 crio[799]: time="2025-10-10T17:32:44.373018575Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2f2006e1-00b0-4848-8442-d0c7015c0f1f name=/runtime.v1.ImageService/ImageStatus
	Oct 10 17:32:44 addons-594989 crio[799]: time="2025-10-10T17:32:44.373174277Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2f2006e1-00b0-4848-8442-d0c7015c0f1f name=/runtime.v1.ImageService/ImageStatus
	Oct 10 17:32:44 addons-594989 crio[799]: time="2025-10-10T17:32:44.373206139Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2f2006e1-00b0-4848-8442-d0c7015c0f1f name=/runtime.v1.ImageService/ImageStatus
	Oct 10 17:32:44 addons-594989 crio[799]: time="2025-10-10T17:32:44.373722287Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b1bee913-2c00-4ac9-ab1b-abad91f8418e name=/runtime.v1.ImageService/PullImage
	Oct 10 17:32:44 addons-594989 crio[799]: time="2025-10-10T17:32:44.375176269Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 10 17:32:46 addons-594989 crio[799]: time="2025-10-10T17:32:46.214110645Z" level=info msg="Removing container: 5e0c76cba254296d25acc0abee46681670b9aebdce72e5878932d927a09ed43c" id=8ea6f633-e138-4a2d-9176-f440e020910e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 17:32:46 addons-594989 crio[799]: time="2025-10-10T17:32:46.220361894Z" level=info msg="Removed container 5e0c76cba254296d25acc0abee46681670b9aebdce72e5878932d927a09ed43c: gcp-auth/gcp-auth-certs-create-hw7sf/create" id=8ea6f633-e138-4a2d-9176-f440e020910e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 17:32:46 addons-594989 crio[799]: time="2025-10-10T17:32:46.222862505Z" level=info msg="Stopping pod sandbox: 0b6fa65e2aa0febd0f7bda2386470382f4fc88d3628ed637ec7006d9b1a9d742" id=83395207-72f5-4477-9fc4-8a4a140495bd name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 10 17:32:46 addons-594989 crio[799]: time="2025-10-10T17:32:46.222920563Z" level=info msg="Stopped pod sandbox (already stopped): 0b6fa65e2aa0febd0f7bda2386470382f4fc88d3628ed637ec7006d9b1a9d742" id=83395207-72f5-4477-9fc4-8a4a140495bd name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 10 17:32:46 addons-594989 crio[799]: time="2025-10-10T17:32:46.223315382Z" level=info msg="Removing pod sandbox: 0b6fa65e2aa0febd0f7bda2386470382f4fc88d3628ed637ec7006d9b1a9d742" id=1edfa225-b487-48f3-b087-efe2ea975a3f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 10 17:32:46 addons-594989 crio[799]: time="2025-10-10T17:32:46.226243926Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 10 17:32:46 addons-594989 crio[799]: time="2025-10-10T17:32:46.22629586Z" level=info msg="Removed pod sandbox: 0b6fa65e2aa0febd0f7bda2386470382f4fc88d3628ed637ec7006d9b1a9d742" id=1edfa225-b487-48f3-b087-efe2ea975a3f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 10 17:32:47 addons-594989 crio[799]: time="2025-10-10T17:32:47.245715734Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=b1bee913-2c00-4ac9-ab1b-abad91f8418e name=/runtime.v1.ImageService/PullImage
	Oct 10 17:32:47 addons-594989 crio[799]: time="2025-10-10T17:32:47.246302872Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e08e5916-c047-4da0-9f9e-5fddb8fb9f53 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 17:32:47 addons-594989 crio[799]: time="2025-10-10T17:32:47.24756344Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f8632bca-a1b7-423d-9264-5ca484bf6c3b name=/runtime.v1.ImageService/ImageStatus
	Oct 10 17:32:47 addons-594989 crio[799]: time="2025-10-10T17:32:47.250570198Z" level=info msg="Creating container: default/busybox/busybox" id=ff9fa3ba-3d63-472b-8e8e-f1567be4ec92 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 17:32:47 addons-594989 crio[799]: time="2025-10-10T17:32:47.251406704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 17:32:47 addons-594989 crio[799]: time="2025-10-10T17:32:47.256439647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 17:32:47 addons-594989 crio[799]: time="2025-10-10T17:32:47.256844476Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 17:32:47 addons-594989 crio[799]: time="2025-10-10T17:32:47.288958951Z" level=info msg="Created container 1ed879b0d53cd382403759d0a2d92c8241541638cb5b8ca9a2dc075319aaeb97: default/busybox/busybox" id=ff9fa3ba-3d63-472b-8e8e-f1567be4ec92 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 17:32:47 addons-594989 crio[799]: time="2025-10-10T17:32:47.289559958Z" level=info msg="Starting container: 1ed879b0d53cd382403759d0a2d92c8241541638cb5b8ca9a2dc075319aaeb97" id=54e8eb9d-91b9-4f4e-b61a-595745f50916 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 17:32:47 addons-594989 crio[799]: time="2025-10-10T17:32:47.291411893Z" level=info msg="Started container" PID=6497 containerID=1ed879b0d53cd382403759d0a2d92c8241541638cb5b8ca9a2dc075319aaeb97 description=default/busybox/busybox id=54e8eb9d-91b9-4f4e-b61a-595745f50916 name=/runtime.v1.RuntimeService/StartContainer sandboxID=03f62b61efde38d5352b2bc659e02f12ac88387b13662e0aec8d0f849f115497
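	The entries above trace the standard CRI flow for a new pod: RunPodSandbox, then ImageStatus and PullImage (the busybox image is absent locally, so it is fetched and resolved to a digest), then CreateContainer and StartContainer. The pull step can be exercised by hand with crictl; a sketch, with the image reference taken from the log:
	
	# Pull the image the kubelet requested above and confirm its digest.
	minikube -p addons-594989 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	minikube -p addons-594989 ssh -- sudo crictl images --digests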
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	1ed879b0d53cd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   03f62b61efde3       busybox                                    default
	e8d77cbefbfdb       registry.k8s.io/ingress-nginx/controller@sha256:cfcddeb96818021113c47ca3db866d083e80550444ed5f24fdc76f66911db270                             14 seconds ago       Running             controller                               0                   91522c7108af4       ingress-nginx-controller-9cc49f96f-szmc7   ingress-nginx
	5e95cdad96822       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          23 seconds ago       Running             csi-snapshotter                          0                   0de4c981d5c38       csi-hostpathplugin-4g74f                   kube-system
	d699fc1ff60de       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          24 seconds ago       Running             csi-provisioner                          0                   0de4c981d5c38       csi-hostpathplugin-4g74f                   kube-system
	f9378118d907d       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            25 seconds ago       Running             liveness-probe                           0                   0de4c981d5c38       csi-hostpathplugin-4g74f                   kube-system
	678a2f9830be7       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           25 seconds ago       Running             hostpath                                 0                   0de4c981d5c38       csi-hostpathplugin-4g74f                   kube-system
	ad42a3a9aced0       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                26 seconds ago       Running             node-driver-registrar                    0                   0de4c981d5c38       csi-hostpathplugin-4g74f                   kube-system
	11a226756b852       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 27 seconds ago       Running             gcp-auth                                 0                   eed2e5a4248c0       gcp-auth-78565c9fb4-nq7rp                  gcp-auth
	1adac014fd3a3       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            30 seconds ago       Running             gadget                                   0                   abbd9f78ebfd3       gadget-cntr6                               gadget
	b55d72508fae2       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              33 seconds ago       Running             registry-proxy                           0                   6ec8e8ba0baa5       registry-proxy-8mr65                       kube-system
	901121a197604       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     36 seconds ago       Running             amd-gpu-device-plugin                    0                   22a51adb6b88a       amd-gpu-device-plugin-b5h8w                kube-system
	78ce271903e84       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                                             36 seconds ago       Exited              patch                                    2                   d7eb0e5c33546       ingress-nginx-admission-patch-vvdlx        ingress-nginx
	fc03eed646aae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:316cd3217236293ba00ab9b5eac4056b15d9ab870b3eeeeb99e0d9139a608aa3                   37 seconds ago       Exited              patch                                    0                   29d5fc073fb47       gcp-auth-certs-patch-p6rff                 gcp-auth
	071e94df1917c       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   37 seconds ago       Running             csi-external-health-monitor-controller   0                   0de4c981d5c38       csi-hostpathplugin-4g74f                   kube-system
	0a23ff7a6d094       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:316cd3217236293ba00ab9b5eac4056b15d9ab870b3eeeeb99e0d9139a608aa3                   49 seconds ago       Exited              create                                   0                   988021fafc5da       ingress-nginx-admission-create-4djcf       ingress-nginx
	c9f157c863480       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             49 seconds ago       Running             csi-attacher                             0                   4604e2fdc7888       csi-hostpath-attacher-0                    kube-system
	6031890f647ec       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              50 seconds ago       Running             csi-resizer                              0                   884802be82161       csi-hostpath-resizer-0                     kube-system
	481780e193071       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             52 seconds ago       Running             local-path-provisioner                   0                   3d564bd36f7bb       local-path-provisioner-648f6765c9-qr9vc    local-path-storage
	7325b7e01b366       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     54 seconds ago       Running             nvidia-device-plugin-ctr                 0                   e6bf113f7eb81       nvidia-device-plugin-daemonset-dlkfx       kube-system
	19825e2ee8b34       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   16b4cced62352       snapshot-controller-7d9fbc56b8-jt7fl       kube-system
	4ab90120209a5       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   b428920959f95       yakd-dashboard-5ff678cb9-xsjmw             yakd-dashboard
	22fc52febdf0c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   13a0f1a77192a       snapshot-controller-7d9fbc56b8-ktmdr       kube-system
	cf1d9072746c6       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   ec4a68a00cb3d       cloud-spanner-emulator-86bd5cbb97-55bl8    default
	b770cbeea4ac5       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   f529ade8c25da       metrics-server-85b7d694d7-wccx5            kube-system
	a6f2b6c587bcc       docker.io/library/registry@sha256:42be4a75b921489e51574e12889b71484a6524a02c4008c52c6f26ce30c7b990                                           About a minute ago   Running             registry                                 0                   dd712ee62a05a       registry-66898fdd98-6gl8m                  kube-system
	0e80700c17777       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   c387030f9c3e9       kube-ingress-dns-minikube                  kube-system
	8cab3f92e9e88       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   2fbdd5e041e28       coredns-66bc5c9577-lpc4f                   kube-system
	8cb0bc1946c2e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   c8086bebda554       storage-provisioner                        kube-system
	a664c4cd86a07       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             2 minutes ago        Running             kindnet-cni                              0                   034b9c6ba72c6       kindnet-rbr7w                              kube-system
	4f4668380d008       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             2 minutes ago        Running             kube-proxy                               0                   d1bcbc8fa6936       kube-proxy-2st6b                           kube-system
	c1f6da858e936       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   0e883274ec80a       kube-apiserver-addons-594989               kube-system
	03911015ab5c0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   4b5365310a2c7       kube-controller-manager-addons-594989      kube-system
	8643869dd690c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   ab90261f3deb4       etcd-addons-594989                         kube-system
	426cb7351d8b7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   71615bf9c7690       kube-scheduler-addons-594989               kube-system
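	The listing above is CRI-O's view of the node and can be reproduced directly with crictl over minikube ssh (profile name from this report; both subcommands are standard crictl):
	
	# List all containers, running and exited, as in the table above.
	minikube -p addons-594989 ssh -- sudo crictl ps -a
	# List the pod sandboxes referenced in the POD ID column.
	minikube -p addons-594989 ssh -- sudo crictl pods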
	
	
	==> coredns [8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c] <==
	[INFO] 10.244.0.19:40726 - 5381 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003777971s
	[INFO] 10.244.0.19:40625 - 37981 "AAAA IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000077291s
	[INFO] 10.244.0.19:40625 - 37668 "A IN registry.kube-system.svc.cluster.local.europe-west4-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.00010039s
	[INFO] 10.244.0.19:43851 - 64578 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000075219s
	[INFO] 10.244.0.19:43851 - 64798 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000096367s
	[INFO] 10.244.0.19:33218 - 58549 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000063427s
	[INFO] 10.244.0.19:33218 - 58299 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00008853s
	[INFO] 10.244.0.19:35474 - 15469 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000194817s
	[INFO] 10.244.0.19:35474 - 15698 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000246163s
	[INFO] 10.244.0.21:60343 - 60127 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000190593s
	[INFO] 10.244.0.21:47049 - 25152 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000264405s
	[INFO] 10.244.0.21:60992 - 12145 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000125565s
	[INFO] 10.244.0.21:59396 - 35820 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000190677s
	[INFO] 10.244.0.21:50813 - 38958 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138878s
	[INFO] 10.244.0.21:35770 - 56798 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000199124s
	[INFO] 10.244.0.21:58002 - 46701 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003696741s
	[INFO] 10.244.0.21:60425 - 26995 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004755982s
	[INFO] 10.244.0.21:57814 - 55303 "AAAA IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004793153s
	[INFO] 10.244.0.21:58354 - 11892 "A IN storage.googleapis.com.europe-west4-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.00773583s
	[INFO] 10.244.0.21:59984 - 55269 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004766072s
	[INFO] 10.244.0.21:60345 - 21079 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007612186s
	[INFO] 10.244.0.21:37302 - 42381 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004734411s
	[INFO] 10.244.0.21:48806 - 44788 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004884932s
	[INFO] 10.244.0.21:51349 - 64954 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000869586s
	[INFO] 10.244.0.21:59551 - 47721 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001670272s
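	The NXDOMAIN chains above are resolv.conf search-path expansion: the client's name is retried against each search suffix (europe-west4-a.c.k8s-minikube.internal, c.k8s-minikube.internal, google.internal, ...) until the fully qualified cluster name answers NOERROR. A sketch of observing this from the busybox pod started earlier in this run, assuming nslookup is available in that image:
	
	# Show the search domains that drive the NXDOMAIN attempts above.
	kubectl --context addons-594989 exec busybox -- cat /etc/resolv.conf
	# The fully qualified service name resolves without any expansion.
	kubectl --context addons-594989 exec busybox -- \
	  nslookup registry.kube-system.svc.cluster.local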
	
	
	==> describe nodes <==
	Name:               addons-594989
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-594989
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=addons-594989
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T17_30_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-594989
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-594989"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 17:30:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-594989
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 17:32:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 17:32:49 +0000   Fri, 10 Oct 2025 17:30:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 17:32:49 +0000   Fri, 10 Oct 2025 17:30:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 17:32:49 +0000   Fri, 10 Oct 2025 17:30:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 17:32:49 +0000   Fri, 10 Oct 2025 17:31:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-594989
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                bc1320a3-f798-4c40-8baa-c6409dc2b259
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-86bd5cbb97-55bl8     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  gadget                      gadget-cntr6                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  gcp-auth                    gcp-auth-78565c9fb4-nq7rp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-szmc7    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         2m2s
	  kube-system                 amd-gpu-device-plugin-b5h8w                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 coredns-66bc5c9577-lpc4f                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 csi-hostpathplugin-4g74f                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 etcd-addons-594989                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m9s
	  kube-system                 kindnet-rbr7w                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m4s
	  kube-system                 kube-apiserver-addons-594989                250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-addons-594989       200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-2st6b                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-scheduler-addons-594989                100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 metrics-server-85b7d694d7-wccx5             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         2m2s
	  kube-system                 nvidia-device-plugin-daemonset-dlkfx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 registry-66898fdd98-6gl8m                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-creds-764b6fb674-5k497             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-proxy-8mr65                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 snapshot-controller-7d9fbc56b8-jt7fl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-ktmdr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  local-path-storage          local-path-provisioner-648f6765c9-qr9vc     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-xsjmw              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     2m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m2s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m15s)  kubelet          Node addons-594989 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m15s)  kubelet          Node addons-594989 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x8 over 2m15s)  kubelet          Node addons-594989 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m9s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s                   kubelet          Node addons-594989 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s                   kubelet          Node addons-594989 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s                   kubelet          Node addons-594989 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m5s                   node-controller  Node addons-594989 event: Registered Node addons-594989 in Controller
	  Normal  NodeReady                82s                    kubelet          Node addons-594989 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct10 17:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.000999] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.084012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.379095] i8042: Warning: Keylock active
	[  +0.013383] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.004173] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000723] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000698] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000926] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000834] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000961] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000723] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000996] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000881] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.444659] block sda: the capability attribute has been deprecated.
	[  +0.077121] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021628] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.602398] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07] <==
	{"level":"warn","ts":"2025-10-10T17:30:43.454221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.460019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.468161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.473662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.479066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.484576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.490240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.501834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.507447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.514355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.520386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.526042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.531699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.548018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.553808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.560179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:43.608486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:54.944092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:30:54.950362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:31:20.982396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:31:20.989357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:31:21.004912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:31:21.012218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55068","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-10T17:31:57.848435Z","caller":"traceutil/trace.go:172","msg":"trace[880933500] transaction","detail":"{read_only:false; response_revision:1066; number_of_response:1; }","duration":"109.517609ms","start":"2025-10-10T17:31:57.738902Z","end":"2025-10-10T17:31:57.848420Z","steps":["trace[880933500] 'process raft request'  (duration: 109.407338ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T17:32:40.992608Z","caller":"traceutil/trace.go:172","msg":"trace[1190951252] transaction","detail":"{read_only:false; response_revision:1272; number_of_response:1; }","duration":"121.840409ms","start":"2025-10-10T17:32:40.870748Z","end":"2025-10-10T17:32:40.992589Z","steps":["trace[1190951252] 'process raft request'  (duration: 121.726607ms)"],"step_count":1}
	
	
	==> gcp-auth [11a226756b852a088f5077fda3513adc69259071ed3ae91bb0c3d326dd67d983] <==
	2025/10/10 17:32:28 GCP Auth Webhook started!
	2025/10/10 17:32:43 Ready to marshal response ...
	2025/10/10 17:32:43 Ready to write response ...
	2025/10/10 17:32:44 Ready to marshal response ...
	2025/10/10 17:32:44 Ready to write response ...
	2025/10/10 17:32:44 Ready to marshal response ...
	2025/10/10 17:32:44 Ready to write response ...
	
	
	==> kernel <==
	 17:32:56 up 15 min,  0 user,  load average: 1.63, 0.87, 0.34
	Linux addons-594989 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505] <==
	E1010 17:31:23.286610       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1010 17:31:23.287634       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1010 17:31:23.287645       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1010 17:31:23.287724       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1010 17:31:24.887131       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 17:31:24.887169       1 metrics.go:72] Registering metrics
	I1010 17:31:24.887335       1 controller.go:711] "Syncing nftables rules"
	I1010 17:31:33.292097       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:31:33.292155       1 main.go:301] handling current node
	I1010 17:31:43.286278       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:31:43.286333       1 main.go:301] handling current node
	I1010 17:31:53.285999       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:31:53.286030       1 main.go:301] handling current node
	I1010 17:32:03.286287       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:32:03.286354       1 main.go:301] handling current node
	I1010 17:32:13.286345       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:32:13.286376       1 main.go:301] handling current node
	I1010 17:32:23.286009       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:32:23.286045       1 main.go:301] handling current node
	I1010 17:32:33.285894       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:32:33.285939       1 main.go:301] handling current node
	I1010 17:32:43.286236       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:32:43.286271       1 main.go:301] handling current node
	I1010 17:32:53.285902       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:32:53.285929       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d] <==
	I1010 17:31:00.618166       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.96.194.163"}
	W1010 17:31:20.982338       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1010 17:31:20.989323       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1010 17:31:21.004865       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1010 17:31:21.012204       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1010 17:31:33.854408       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.194.163:443: connect: connection refused
	E1010 17:31:33.854457       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.194.163:443: connect: connection refused" logger="UnhandledError"
	W1010 17:31:33.854478       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.194.163:443: connect: connection refused
	E1010 17:31:33.854513       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.194.163:443: connect: connection refused" logger="UnhandledError"
	W1010 17:31:33.873138       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.194.163:443: connect: connection refused
	E1010 17:31:33.873174       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.194.163:443: connect: connection refused" logger="UnhandledError"
	W1010 17:31:33.873146       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.194.163:443: connect: connection refused
	E1010 17:31:33.873279       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.194.163:443: connect: connection refused" logger="UnhandledError"
	W1010 17:31:44.407623       1 handler_proxy.go:99] no RequestInfo found in the context
	E1010 17:31:44.407699       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1010 17:31:44.407751       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.205.41:443: connect: connection refused" logger="UnhandledError"
	E1010 17:31:44.409180       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.205.41:443: connect: connection refused" logger="UnhandledError"
	E1010 17:31:44.414920       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.205.41:443: connect: connection refused" logger="UnhandledError"
	E1010 17:31:44.435650       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.205.41:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.205.41:443: connect: connection refused" logger="UnhandledError"
	I1010 17:31:44.502672       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1010 17:32:54.199837       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54012: use of closed network connection
	E1010 17:32:54.347926       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54034: use of closed network connection
	
	
	==> kube-controller-manager [03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04] <==
	I1010 17:30:50.965688       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1010 17:30:50.965695       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1010 17:30:50.965926       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1010 17:30:50.965968       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1010 17:30:50.965994       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1010 17:30:50.966191       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1010 17:30:50.966226       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1010 17:30:50.966294       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1010 17:30:50.966567       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1010 17:30:50.968165       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1010 17:30:50.968280       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1010 17:30:50.968366       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1010 17:30:50.969331       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1010 17:30:50.973225       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 17:30:50.975460       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1010 17:30:50.979704       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1010 17:30:50.984937       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1010 17:31:20.976991       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1010 17:31:20.977170       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1010 17:31:20.977206       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1010 17:31:20.995575       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1010 17:31:20.999441       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1010 17:31:21.078240       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 17:31:21.099832       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 17:31:35.920817       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336] <==
	I1010 17:30:52.842520       1 server_linux.go:53] "Using iptables proxy"
	I1010 17:30:53.242118       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 17:30:53.343278       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 17:30:53.343387       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1010 17:30:53.343511       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 17:30:53.517826       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 17:30:53.517976       1 server_linux.go:132] "Using iptables Proxier"
	I1010 17:30:53.528718       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 17:30:53.537312       1 server.go:527] "Version info" version="v1.34.1"
	I1010 17:30:53.537552       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 17:30:53.539916       1 config.go:200] "Starting service config controller"
	I1010 17:30:53.540002       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 17:30:53.540064       1 config.go:106] "Starting endpoint slice config controller"
	I1010 17:30:53.541365       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 17:30:53.540527       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 17:30:53.541450       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 17:30:53.541071       1 config.go:309] "Starting node config controller"
	I1010 17:30:53.541499       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 17:30:53.541524       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 17:30:53.640370       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1010 17:30:53.641583       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 17:30:53.641658       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403] <==
	E1010 17:30:43.997541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1010 17:30:43.997570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1010 17:30:43.997609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1010 17:30:43.997627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1010 17:30:43.997672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1010 17:30:43.997676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1010 17:30:43.996546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1010 17:30:43.997682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1010 17:30:43.997698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1010 17:30:43.997794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1010 17:30:43.997847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1010 17:30:43.997852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1010 17:30:44.885991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1010 17:30:45.044912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1010 17:30:45.069471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1010 17:30:45.088704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1010 17:30:45.091700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1010 17:30:45.107907       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1010 17:30:45.109733       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1010 17:30:45.122661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1010 17:30:45.172771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1010 17:30:45.186692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1010 17:30:45.191574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1010 17:30:45.199796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1010 17:30:48.194410       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 10 17:32:20 addons-594989 kubelet[1334]: I1010 17:32:20.520753    1334 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-b5h8w" secret="" err="secret \"gcp-auth\" not found"
	Oct 10 17:32:20 addons-594989 kubelet[1334]: I1010 17:32:20.522214    1334 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29d5fc073fb47649d38bd754f730eb790953d836d2a9fa67627f992251b88d87"
	Oct 10 17:32:20 addons-594989 kubelet[1334]: I1010 17:32:20.532198    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-b5h8w" podStartSLOduration=2.147933222 podStartE2EDuration="47.532180372s" podCreationTimestamp="2025-10-10 17:31:33 +0000 UTC" firstStartedPulling="2025-10-10 17:31:34.289347271 +0000 UTC m=+48.148143470" lastFinishedPulling="2025-10-10 17:32:19.673594425 +0000 UTC m=+93.532390620" observedRunningTime="2025-10-10 17:32:20.530867452 +0000 UTC m=+94.389663657" watchObservedRunningTime="2025-10-10 17:32:20.532180372 +0000 UTC m=+94.390976577"
	Oct 10 17:32:20 addons-594989 kubelet[1334]: I1010 17:32:20.647497    1334 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crmrc\" (UniqueName: \"kubernetes.io/projected/144b3618-7db5-4d14-b80d-32c108160adc-kube-api-access-crmrc\") pod \"144b3618-7db5-4d14-b80d-32c108160adc\" (UID: \"144b3618-7db5-4d14-b80d-32c108160adc\") "
	Oct 10 17:32:20 addons-594989 kubelet[1334]: I1010 17:32:20.649589    1334 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/144b3618-7db5-4d14-b80d-32c108160adc-kube-api-access-crmrc" (OuterVolumeSpecName: "kube-api-access-crmrc") pod "144b3618-7db5-4d14-b80d-32c108160adc" (UID: "144b3618-7db5-4d14-b80d-32c108160adc"). InnerVolumeSpecName "kube-api-access-crmrc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 10 17:32:20 addons-594989 kubelet[1334]: I1010 17:32:20.748983    1334 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-crmrc\" (UniqueName: \"kubernetes.io/projected/144b3618-7db5-4d14-b80d-32c108160adc-kube-api-access-crmrc\") on node \"addons-594989\" DevicePath \"\""
	Oct 10 17:32:21 addons-594989 kubelet[1334]: I1010 17:32:21.527993    1334 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7eb0e5c33546f0f4912dde7b4981900061d57a8c32c5a29af898df4ffb2047a"
	Oct 10 17:32:21 addons-594989 kubelet[1334]: I1010 17:32:21.528185    1334 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-b5h8w" secret="" err="secret \"gcp-auth\" not found"
	Oct 10 17:32:23 addons-594989 kubelet[1334]: I1010 17:32:23.535502    1334 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-8mr65" secret="" err="secret \"gcp-auth\" not found"
	Oct 10 17:32:24 addons-594989 kubelet[1334]: I1010 17:32:24.538687    1334 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-8mr65" secret="" err="secret \"gcp-auth\" not found"
	Oct 10 17:32:25 addons-594989 kubelet[1334]: I1010 17:32:25.555664    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-8mr65" podStartSLOduration=4.275374631 podStartE2EDuration="52.555645367s" podCreationTimestamp="2025-10-10 17:31:33 +0000 UTC" firstStartedPulling="2025-10-10 17:31:34.363735951 +0000 UTC m=+48.222532137" lastFinishedPulling="2025-10-10 17:32:22.644006673 +0000 UTC m=+96.502802873" observedRunningTime="2025-10-10 17:32:23.546798448 +0000 UTC m=+97.405594653" watchObservedRunningTime="2025-10-10 17:32:25.555645367 +0000 UTC m=+99.414441573"
	Oct 10 17:32:25 addons-594989 kubelet[1334]: I1010 17:32:25.555880    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-cntr6" podStartSLOduration=67.463499082 podStartE2EDuration="1m32.555871285s" podCreationTimestamp="2025-10-10 17:30:53 +0000 UTC" firstStartedPulling="2025-10-10 17:32:00.203777928 +0000 UTC m=+74.062574113" lastFinishedPulling="2025-10-10 17:32:25.296150129 +0000 UTC m=+99.154946316" observedRunningTime="2025-10-10 17:32:25.554326992 +0000 UTC m=+99.413123197" watchObservedRunningTime="2025-10-10 17:32:25.555871285 +0000 UTC m=+99.414667492"
	Oct 10 17:32:28 addons-594989 kubelet[1334]: I1010 17:32:28.565387    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-nq7rp" podStartSLOduration=66.343640686 podStartE2EDuration="1m28.565367042s" podCreationTimestamp="2025-10-10 17:31:00 +0000 UTC" firstStartedPulling="2025-10-10 17:32:06.02534522 +0000 UTC m=+79.884141407" lastFinishedPulling="2025-10-10 17:32:28.247071568 +0000 UTC m=+102.105867763" observedRunningTime="2025-10-10 17:32:28.56404099 +0000 UTC m=+102.422837195" watchObservedRunningTime="2025-10-10 17:32:28.565367042 +0000 UTC m=+102.424163248"
	Oct 10 17:32:31 addons-594989 kubelet[1334]: I1010 17:32:31.288452    1334 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 10 17:32:31 addons-594989 kubelet[1334]: I1010 17:32:31.288492    1334 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 10 17:32:33 addons-594989 kubelet[1334]: I1010 17:32:33.597866    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-4g74f" podStartSLOduration=2.193927415 podStartE2EDuration="1m0.597844662s" podCreationTimestamp="2025-10-10 17:31:33 +0000 UTC" firstStartedPulling="2025-10-10 17:31:34.288741205 +0000 UTC m=+48.147537401" lastFinishedPulling="2025-10-10 17:32:32.692658464 +0000 UTC m=+106.551454648" observedRunningTime="2025-10-10 17:32:33.596624222 +0000 UTC m=+107.455420438" watchObservedRunningTime="2025-10-10 17:32:33.597844662 +0000 UTC m=+107.456640868"
	Oct 10 17:32:37 addons-594989 kubelet[1334]: E1010 17:32:37.779620    1334 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 10 17:32:37 addons-594989 kubelet[1334]: E1010 17:32:37.779712    1334 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1404742-2d86-4ac9-91f6-3d70ff795aa1-gcr-creds podName:b1404742-2d86-4ac9-91f6-3d70ff795aa1 nodeName:}" failed. No retries permitted until 2025-10-10 17:33:41.779691431 +0000 UTC m=+175.638487634 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/b1404742-2d86-4ac9-91f6-3d70ff795aa1-gcr-creds") pod "registry-creds-764b6fb674-5k497" (UID: "b1404742-2d86-4ac9-91f6-3d70ff795aa1") : secret "registry-creds-gcr" not found
	Oct 10 17:32:38 addons-594989 kubelet[1334]: I1010 17:32:38.224030    1334 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ab506da-913d-4336-83f4-651006446d98" path="/var/lib/kubelet/pods/8ab506da-913d-4336-83f4-651006446d98/volumes"
	Oct 10 17:32:41 addons-594989 kubelet[1334]: I1010 17:32:41.627807    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-szmc7" podStartSLOduration=105.225105446 podStartE2EDuration="1m48.627789221s" podCreationTimestamp="2025-10-10 17:30:53 +0000 UTC" firstStartedPulling="2025-10-10 17:32:38.114194294 +0000 UTC m=+111.972990491" lastFinishedPulling="2025-10-10 17:32:41.516878083 +0000 UTC m=+115.375674266" observedRunningTime="2025-10-10 17:32:41.62670218 +0000 UTC m=+115.485498385" watchObservedRunningTime="2025-10-10 17:32:41.627789221 +0000 UTC m=+115.486585426"
	Oct 10 17:32:44 addons-594989 kubelet[1334]: I1010 17:32:44.229477    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b556e7f6-46e6-40e5-9826-18498598bc80-gcp-creds\") pod \"busybox\" (UID: \"b556e7f6-46e6-40e5-9826-18498598bc80\") " pod="default/busybox"
	Oct 10 17:32:44 addons-594989 kubelet[1334]: I1010 17:32:44.229522    1334 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jftbj\" (UniqueName: \"kubernetes.io/projected/b556e7f6-46e6-40e5-9826-18498598bc80-kube-api-access-jftbj\") pod \"busybox\" (UID: \"b556e7f6-46e6-40e5-9826-18498598bc80\") " pod="default/busybox"
	Oct 10 17:32:46 addons-594989 kubelet[1334]: I1010 17:32:46.212934    1334 scope.go:117] "RemoveContainer" containerID="5e0c76cba254296d25acc0abee46681670b9aebdce72e5878932d927a09ed43c"
	Oct 10 17:32:47 addons-594989 kubelet[1334]: I1010 17:32:47.653680    1334 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.780082198 podStartE2EDuration="3.653660672s" podCreationTimestamp="2025-10-10 17:32:44 +0000 UTC" firstStartedPulling="2025-10-10 17:32:44.373432902 +0000 UTC m=+118.232229086" lastFinishedPulling="2025-10-10 17:32:47.247011358 +0000 UTC m=+121.105807560" observedRunningTime="2025-10-10 17:32:47.652662416 +0000 UTC m=+121.511458621" watchObservedRunningTime="2025-10-10 17:32:47.653660672 +0000 UTC m=+121.512456878"
	Oct 10 17:32:50 addons-594989 kubelet[1334]: I1010 17:32:50.224244    1334 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3cebb67-2273-406c-a51e-4aee7e8fa866" path="/var/lib/kubelet/pods/d3cebb67-2273-406c-a51e-4aee7e8fa866/volumes"
	
	
	==> storage-provisioner [8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa] <==
	W1010 17:32:30.636451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:32.639748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:32.643943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:34.646430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:34.649872       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:36.652366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:36.656540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:38.660445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:38.667877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:40.671037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:40.698128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:42.701252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:42.705976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:44.709307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:44.714658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:46.716956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:46.772165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:48.775490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:48.779167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:50.781864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:50.785414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:52.788211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:52.791929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:54.794493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:32:54.800385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-594989 -n addons-594989
helpers_test.go:269: (dbg) Run:  kubectl --context addons-594989 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-4djcf ingress-nginx-admission-patch-vvdlx registry-creds-764b6fb674-5k497
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-594989 describe pod ingress-nginx-admission-create-4djcf ingress-nginx-admission-patch-vvdlx registry-creds-764b6fb674-5k497
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-594989 describe pod ingress-nginx-admission-create-4djcf ingress-nginx-admission-patch-vvdlx registry-creds-764b6fb674-5k497: exit status 1 (56.908498ms)
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4djcf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vvdlx" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-5k497" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-594989 describe pod ingress-nginx-admission-create-4djcf ingress-nginx-admission-patch-vvdlx registry-creds-764b6fb674-5k497: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 addons disable headlamp --alsologtostderr -v=1: exit status 11 (233.486003ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1010 17:32:56.828268   20231 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:32:56.828415   20231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:32:56.828424   20231 out.go:374] Setting ErrFile to fd 2...
	I1010 17:32:56.828429   20231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:32:56.828632   20231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:32:56.828857   20231 mustload.go:65] Loading cluster: addons-594989
	I1010 17:32:56.829174   20231 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:32:56.829187   20231 addons.go:606] checking whether the cluster is paused
	I1010 17:32:56.829262   20231 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:32:56.829274   20231 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:32:56.829666   20231 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:32:56.846989   20231 ssh_runner.go:195] Run: systemctl --version
	I1010 17:32:56.847038   20231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:32:56.866566   20231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:32:56.962833   20231 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:32:56.962889   20231 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:32:56.994185   20231 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:32:56.994203   20231 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:32:56.994207   20231 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:32:56.994209   20231 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:32:56.994212   20231 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:32:56.994215   20231 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:32:56.994218   20231 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:32:56.994220   20231 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:32:56.994223   20231 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:32:56.994228   20231 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:32:56.994230   20231 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:32:56.994233   20231 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:32:56.994235   20231 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:32:56.994237   20231 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:32:56.994240   20231 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:32:56.994255   20231 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:32:56.994260   20231 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:32:56.994265   20231 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:32:56.994269   20231 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:32:56.994273   20231 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:32:56.994280   20231 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:32:56.994284   20231 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:32:56.994289   20231 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:32:56.994293   20231 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:32:56.994297   20231 cri.go:89] found id: ""
	I1010 17:32:56.994334   20231 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:32:57.009389   20231 out.go:203] 
	W1010 17:32:57.010581   20231 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:32:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:32:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:32:57.010604   20231 out.go:285] * 
	* 
	W1010 17:32:57.015925   20231 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:32:57.017180   20231 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-594989 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.45s)
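Editor's note: every "addons disable" failure in this report shares the stderr signature shown above. The kube-system containers are listed successfully with crictl, but the follow-up pause check, `sudo runc list -f json`, exits 1 with "open /run/runc: no such file or directory", so minikube aborts with MK_ADDON_DISABLE_PAUSED (exit status 11) before the addon is ever disabled. Below is a minimal Go sketch of that two-step check, built only from the commands visible in the trace; the function names are illustrative and are not minikube's actual API.

	// Hypothetical sketch of the pause check traced in the stderr above
	// (cri.go / ssh_runner.go). Names are illustrative, not minikube's API.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers mirrors the crictl call that succeeds on this
	// node, yielding the IDs printed as "found id:" in the log.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "-s", "eval",
			`crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// listPausedWithRunc mirrors the step that fails: on this crio node the
	// runc state directory /run/runc does not exist, so the command exits 1.
	func listPausedWithRunc() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("list paused: runc: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if ids, err := listKubeSystemContainers(); err == nil {
			fmt.Printf("found %d kube-system containers\n", len(ids))
		}
		if err := listPausedWithRunc(); err != nil {
			fmt.Println(err) // open /run/runc: no such file or directory
		}
	}

Because the crictl step succeeds while the runc step fails, the containers are demonstrably not paused; exit status 11 reflects the broken check itself rather than actual cluster state.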
TestAddons/parallel/CloudSpanner (5.24s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-55bl8" [7ae98df2-3c0b-4ce0-94af-0e94849b6129] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.00299319s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (228.725723ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1010 17:33:04.909879   20867 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:33:04.910191   20867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:04.910202   20867 out.go:374] Setting ErrFile to fd 2...
	I1010 17:33:04.910213   20867 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:04.910384   20867 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:33:04.910618   20867 mustload.go:65] Loading cluster: addons-594989
	I1010 17:33:04.910926   20867 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:04.910940   20867 addons.go:606] checking whether the cluster is paused
	I1010 17:33:04.911019   20867 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:04.911031   20867 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:33:04.911415   20867 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:33:04.929956   20867 ssh_runner.go:195] Run: systemctl --version
	I1010 17:33:04.930012   20867 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:33:04.947147   20867 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:33:05.042536   20867 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:33:05.042607   20867 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:33:05.072432   20867 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:33:05.072453   20867 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:33:05.072459   20867 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:33:05.072463   20867 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:33:05.072467   20867 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:33:05.072472   20867 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:33:05.072476   20867 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:33:05.072480   20867 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:33:05.072484   20867 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:33:05.072490   20867 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:33:05.072494   20867 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:33:05.072506   20867 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:33:05.072513   20867 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:33:05.072517   20867 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:33:05.072524   20867 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:33:05.072530   20867 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:33:05.072537   20867 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:33:05.072548   20867 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:33:05.072552   20867 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:33:05.072559   20867 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:33:05.072564   20867 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:33:05.072571   20867 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:33:05.072575   20867 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:33:05.072582   20867 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:33:05.072586   20867 cri.go:89] found id: ""
	I1010 17:33:05.072627   20867 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:33:05.087605   20867 out.go:203] 
	W1010 17:33:05.088703   20867 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:33:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:33:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:33:05.088722   20867 out.go:285] * 
	* 
	W1010 17:33:05.091696   20867 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:33:05.092810   20867 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-594989 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.24s)
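Editor's note: the CloudSpanner disable fails for the same root cause as Headlamp above; only the PID, timestamps, and log-file hash differ. To confirm which runtime state directories actually exist inside the node (e.g. after `minikube -p addons-594989 ssh`), a hypothetical diagnostic sketch follows; the paths are conventional defaults for runc, crio, and crun, not values read from the node's crio configuration.

	// Hypothetical diagnostic: report which container-runtime state
	// directories exist, to show why `runc list` fails while crictl works.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		for _, dir := range []string{
			"/run/runc", // runc's default root; absent per the stderr above
			"/run/crio", // crio's own runtime state directory
			"/run/crun", // crun, the OCI runtime crio commonly defaults to
		} {
			if _, err := os.Stat(dir); err != nil {
				fmt.Printf("%-10s absent (%v)\n", dir, err)
			} else {
				fmt.Printf("%-10s present\n", dir)
			}
		}
	}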
TestAddons/parallel/LocalPath (13.14s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-594989 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-594989 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-594989 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [35328e4f-2a94-46b9-9e3c-b238263af678] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [35328e4f-2a94-46b9-9e3c-b238263af678] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [35328e4f-2a94-46b9-9e3c-b238263af678] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.002645251s
addons_test.go:967: (dbg) Run:  kubectl --context addons-594989 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 ssh "cat /opt/local-path-provisioner/pvc-f303975d-b3e2-4cbe-8760-b97c507c5465_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-594989 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-594989 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (230.029982ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1010 17:33:13.807268   22390 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:33:13.807541   22390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:13.807550   22390 out.go:374] Setting ErrFile to fd 2...
	I1010 17:33:13.807554   22390 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:13.807757   22390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:33:13.808185   22390 mustload.go:65] Loading cluster: addons-594989
	I1010 17:33:13.808609   22390 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:13.808628   22390 addons.go:606] checking whether the cluster is paused
	I1010 17:33:13.808712   22390 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:13.808723   22390 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:33:13.809104   22390 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:33:13.827158   22390 ssh_runner.go:195] Run: systemctl --version
	I1010 17:33:13.827200   22390 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:33:13.844355   22390 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:33:13.939592   22390 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:33:13.939725   22390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:33:13.968674   22390 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:33:13.968692   22390 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:33:13.968696   22390 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:33:13.968699   22390 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:33:13.968701   22390 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:33:13.968704   22390 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:33:13.968707   22390 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:33:13.968709   22390 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:33:13.968711   22390 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:33:13.968716   22390 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:33:13.968719   22390 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:33:13.968721   22390 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:33:13.968723   22390 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:33:13.968726   22390 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:33:13.968728   22390 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:33:13.968735   22390 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:33:13.968744   22390 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:33:13.968748   22390 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:33:13.968751   22390 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:33:13.968753   22390 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:33:13.968755   22390 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:33:13.968758   22390 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:33:13.968760   22390 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:33:13.968762   22390 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:33:13.968764   22390 cri.go:89] found id: ""
	I1010 17:33:13.968801   22390 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:33:13.983956   22390 out.go:203] 
	W1010 17:33:13.984989   22390 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:33:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:33:13.985008   22390 out.go:285] * 
	W1010 17:33:13.988829   22390 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:33:13.990249   22390 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-594989 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (13.14s)

TestAddons/parallel/NvidiaDevicePlugin (6.28s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-dlkfx" [a5f0e8a4-7957-414f-887d-b3cddab72a1e] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003128507s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (279.051179ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1010 17:33:00.630293   20483 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:33:00.630620   20483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:00.630633   20483 out.go:374] Setting ErrFile to fd 2...
	I1010 17:33:00.630639   20483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:00.630934   20483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:33:00.631291   20483 mustload.go:65] Loading cluster: addons-594989
	I1010 17:33:00.631793   20483 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:00.631811   20483 addons.go:606] checking whether the cluster is paused
	I1010 17:33:00.631934   20483 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:00.631951   20483 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:33:00.632515   20483 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:33:00.656112   20483 ssh_runner.go:195] Run: systemctl --version
	I1010 17:33:00.656177   20483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:33:00.678720   20483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:33:00.785367   20483 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:33:00.785453   20483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:33:00.821834   20483 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:33:00.821885   20483 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:33:00.821892   20483 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:33:00.821896   20483 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:33:00.821900   20483 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:33:00.821906   20483 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:33:00.821910   20483 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:33:00.821914   20483 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:33:00.821917   20483 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:33:00.821931   20483 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:33:00.821941   20483 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:33:00.821945   20483 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:33:00.821948   20483 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:33:00.821952   20483 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:33:00.821955   20483 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:33:00.821964   20483 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:33:00.821968   20483 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:33:00.821973   20483 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:33:00.821977   20483 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:33:00.821980   20483 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:33:00.821984   20483 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:33:00.821987   20483 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:33:00.821991   20483 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:33:00.821996   20483 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:33:00.822012   20483 cri.go:89] found id: ""
	I1010 17:33:00.822088   20483 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:33:00.841852   20483 out.go:203] 
	W1010 17:33:00.843154   20483 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:33:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:33:00.843178   20483 out.go:285] * 
	W1010 17:33:00.848625   20483 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:33:00.850031   20483 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-594989 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.28s)

TestAddons/parallel/Yakd (5.24s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-xsjmw" [81db2f93-c26b-4ca8-9de4-8c9a5c58e8d0] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.002834474s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 addons disable yakd --alsologtostderr -v=1: exit status 11 (232.085013ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1010 17:33:20.835986   22931 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:33:20.836268   22931 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:20.836277   22931 out.go:374] Setting ErrFile to fd 2...
	I1010 17:33:20.836281   22931 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:20.836470   22931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:33:20.836723   22931 mustload.go:65] Loading cluster: addons-594989
	I1010 17:33:20.837077   22931 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:20.837091   22931 addons.go:606] checking whether the cluster is paused
	I1010 17:33:20.837170   22931 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:20.837182   22931 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:33:20.837514   22931 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:33:20.854382   22931 ssh_runner.go:195] Run: systemctl --version
	I1010 17:33:20.854420   22931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:33:20.871516   22931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:33:20.967989   22931 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:33:20.968084   22931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:33:21.003833   22931 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:33:21.003861   22931 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:33:21.003866   22931 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:33:21.003870   22931 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:33:21.003875   22931 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:33:21.003881   22931 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:33:21.003886   22931 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:33:21.003891   22931 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:33:21.003895   22931 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:33:21.003909   22931 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:33:21.003916   22931 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:33:21.003920   22931 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:33:21.003928   22931 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:33:21.003932   22931 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:33:21.003938   22931 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:33:21.003948   22931 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:33:21.003953   22931 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:33:21.003965   22931 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:33:21.003970   22931 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:33:21.003974   22931 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:33:21.003980   22931 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:33:21.003984   22931 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:33:21.003991   22931 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:33:21.003995   22931 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:33:21.004000   22931 cri.go:89] found id: ""
	I1010 17:33:21.004047   22931 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:33:21.018835   22931 out.go:203] 
	W1010 17:33:21.019749   22931 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:33:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:33:21.019769   22931 out.go:285] * 
	W1010 17:33:21.022900   22931 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:33:21.023951   22931 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-594989 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.24s)

TestAddons/parallel/AmdGpuDevicePlugin (6.23s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-b5h8w" [fb6102b5-8464-4645-901a-f2f471fa6e63] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003257611s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-594989 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-594989 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (225.87807ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1010 17:33:20.039400   22857 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:33:20.039672   22857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:20.039681   22857 out.go:374] Setting ErrFile to fd 2...
	I1010 17:33:20.039685   22857 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:33:20.039869   22857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:33:20.040123   22857 mustload.go:65] Loading cluster: addons-594989
	I1010 17:33:20.040453   22857 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:20.040467   22857 addons.go:606] checking whether the cluster is paused
	I1010 17:33:20.040541   22857 config.go:182] Loaded profile config "addons-594989": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:33:20.040551   22857 host.go:66] Checking if "addons-594989" exists ...
	I1010 17:33:20.040884   22857 cli_runner.go:164] Run: docker container inspect addons-594989 --format={{.State.Status}}
	I1010 17:33:20.058844   22857 ssh_runner.go:195] Run: systemctl --version
	I1010 17:33:20.058907   22857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-594989
	I1010 17:33:20.077008   22857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/addons-594989/id_rsa Username:docker}
	I1010 17:33:20.173358   22857 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 17:33:20.173451   22857 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 17:33:20.202379   22857 cri.go:89] found id: "5e95cdad968221cb4aa3e3f82adc548bb3d5b365829bff504b6b9205dce0e7fd"
	I1010 17:33:20.202401   22857 cri.go:89] found id: "d699fc1ff60deb831fc7ac36084436101b2b6f7f34bc49bb7395f67303eddd87"
	I1010 17:33:20.202404   22857 cri.go:89] found id: "f9378118d907d056ea9eef46a4fc61abb3f92e4f6c26d4b923dfdde2abf957d2"
	I1010 17:33:20.202407   22857 cri.go:89] found id: "678a2f9830be76f38c2b341c8e72305078d85125c2e54db3f06c6813bd0a0d9a"
	I1010 17:33:20.202410   22857 cri.go:89] found id: "ad42a3a9aced05fb033bff70ddc9ef71d5b204c2f932be8648ba95c3746e71c2"
	I1010 17:33:20.202414   22857 cri.go:89] found id: "b55d72508fae28446486a54619cff08b4afb5c385f6a5e4eac89e3cfebc91592"
	I1010 17:33:20.202417   22857 cri.go:89] found id: "901121a197604a1c522a2d4628a282334fc05ad33f3fdcb043724310be152785"
	I1010 17:33:20.202419   22857 cri.go:89] found id: "071e94df1917ca297f7279600d3626289e55027384ba88081d3c6d75e7c1e418"
	I1010 17:33:20.202422   22857 cri.go:89] found id: "c9f157c8634805afa056c3c518497da9987b920da8bf0ac132118ed4e4ef8ea9"
	I1010 17:33:20.202427   22857 cri.go:89] found id: "6031890f647ecd1229bdaba7d90fb473ac9a8831d40666fdbc09bd914ca1987a"
	I1010 17:33:20.202430   22857 cri.go:89] found id: "7325b7e01b3666ecc0095ccec9564f71a65648e8cf9ce1d9e1915c7d1eaa574a"
	I1010 17:33:20.202433   22857 cri.go:89] found id: "19825e2ee8b346bd47414fd8e5247ef52e5b2e32f3eb7196eef394c88fd2275f"
	I1010 17:33:20.202436   22857 cri.go:89] found id: "22fc52febdf0cdaac8c2a5aad2960659c8a0115782e452ba76c5328f526e478c"
	I1010 17:33:20.202438   22857 cri.go:89] found id: "b770cbeea4ac5736e7c6c1c1e37f4cf430284066d47389ba72462e3d59a6fc36"
	I1010 17:33:20.202441   22857 cri.go:89] found id: "a6f2b6c587bcccac9182b5ea634d295f575138b032fc36b618e6fd522dd3434a"
	I1010 17:33:20.202446   22857 cri.go:89] found id: "0e80700c177774e91775e01b322ba4b7c3ad23f691a7e8ef083285a034138f33"
	I1010 17:33:20.202448   22857 cri.go:89] found id: "8cab3f92e9e88acd6ccdda17457d84c2208a2db36f61d1769c76a89b89d5c06c"
	I1010 17:33:20.202452   22857 cri.go:89] found id: "8cb0bc1946c2ed4da67ec55c8c6c99b35b9087ba2260de09913804f39b37e9aa"
	I1010 17:33:20.202454   22857 cri.go:89] found id: "a664c4cd86a07ae3da31b7161b1ffcb861502990a087baf33bb177718c331505"
	I1010 17:33:20.202456   22857 cri.go:89] found id: "4f4668380d0085c1200a82058cc3e69994ce54d202cd46003f4aeb1592745336"
	I1010 17:33:20.202459   22857 cri.go:89] found id: "c1f6da858e936ea82a29d1bd82704a1271d21a8c3ef131087c8a1ffc041f909d"
	I1010 17:33:20.202461   22857 cri.go:89] found id: "03911015ab5c009ca66dea098febf11b6b587cc8665f0cf85bf894a7d24caf04"
	I1010 17:33:20.202463   22857 cri.go:89] found id: "8643869dd690c538be7e9ae88ed91a5133d80d777f7f05864080e0071de6ce07"
	I1010 17:33:20.202466   22857 cri.go:89] found id: "426cb7351d8b7ffa8dc04159ce227020bab5b313130d3d7ea54e381c5e1ff403"
	I1010 17:33:20.202468   22857 cri.go:89] found id: ""
	I1010 17:33:20.202510   22857 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 17:33:20.216330   22857 out.go:203] 
	W1010 17:33:20.217312   22857 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:33:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 17:33:20.217327   22857 out.go:285] * 
	W1010 17:33:20.220326   22857 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 17:33:20.221417   22857 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-594989 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.23s)

TestFunctional/parallel/ServiceCmdConnect (602.81s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-728643 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-728643 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-nlq8q" [076b7f61-418e-4e38-ab3b-c3b50ec7e9d1] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-728643 -n functional-728643
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-10 17:49:44.732789193 +0000 UTC m=+1235.915095521
functional_test.go:1645: (dbg) Run:  kubectl --context functional-728643 describe po hello-node-connect-7d85dfc575-nlq8q -n default
functional_test.go:1645: (dbg) kubectl --context functional-728643 describe po hello-node-connect-7d85dfc575-nlq8q -n default:
Name:             hello-node-connect-7d85dfc575-nlq8q
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-728643/192.168.49.2
Start Time:       Fri, 10 Oct 2025 17:39:44 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:
    Image:          kicbase/echo-server
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kz98q (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-kz98q:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-nlq8q to functional-728643
  Normal   Pulling    7m1s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m1s (x5 over 9m57s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m1s (x5 over 9m57s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m53s (x21 over 9m56s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m53s (x21 over 9m56s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-728643 logs hello-node-connect-7d85dfc575-nlq8q -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-728643 logs hello-node-connect-7d85dfc575-nlq8q -n default: exit status 1 (60.644807ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-nlq8q" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-728643 logs hello-node-connect-7d85dfc575-nlq8q -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
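The pull failures above are short-name resolution errors from the crio runtime: the unqualified reference `kicbase/echo-server` is ambiguous while short-name mode is enforcing. A hedged workaround sketch — the registry prefix and tag below are assumptions for illustration, not values taken from this report:

    # fully qualify the image so the short-name resolver is never consulted
    kubectl --context functional-728643 set image deployment/hello-node-connect \
      echo-server=docker.io/kicbase/echo-server:latest

Alternatively, setting `short-name-mode = "permissive"` in the node's /etc/containers/registries.conf would relax enforcement, at the cost of trusting the unqualified-search registry order.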
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-728643 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-nlq8q
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-728643/192.168.49.2
Start Time:       Fri, 10 Oct 2025 17:39:44 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:
    Image:          kicbase/echo-server
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kz98q (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-kz98q:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-nlq8q to functional-728643
  Normal   Pulling    7m1s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m1s (x5 over 9m57s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m1s (x5 over 9m57s)    kubelet            Error: ErrImagePull
  Warning  Failed     4m53s (x21 over 9m56s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    0s (x43 over 9m56s)     kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-728643 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-728643 logs -l app=hello-node-connect: exit status 1 (97.961546ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-nlq8q" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-728643 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-728643 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.254.19
IPs:                      10.104.254.19
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30677/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
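The empty `Endpoints:` field above follows directly from the pod never becoming Ready: a NodePort service only gets endpoints from pods that pass readiness, so the connect test had nothing to route to. A quick confirmation (sketch; same kubectl context as the rest of this test):

    kubectl --context functional-728643 get endpoints hello-node-connect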
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-728643
helpers_test.go:243: (dbg) docker inspect functional-728643:

-- stdout --
	[
	    {
	        "Id": "7970e6365a84accd0e35f708e7caf4441d8dbc1bb442c3896edc190043df0968",
	        "Created": "2025-10-10T17:36:41.414775769Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33373,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T17:36:41.444632587Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/7970e6365a84accd0e35f708e7caf4441d8dbc1bb442c3896edc190043df0968/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7970e6365a84accd0e35f708e7caf4441d8dbc1bb442c3896edc190043df0968/hostname",
	        "HostsPath": "/var/lib/docker/containers/7970e6365a84accd0e35f708e7caf4441d8dbc1bb442c3896edc190043df0968/hosts",
	        "LogPath": "/var/lib/docker/containers/7970e6365a84accd0e35f708e7caf4441d8dbc1bb442c3896edc190043df0968/7970e6365a84accd0e35f708e7caf4441d8dbc1bb442c3896edc190043df0968-json.log",
	        "Name": "/functional-728643",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-728643:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-728643",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7970e6365a84accd0e35f708e7caf4441d8dbc1bb442c3896edc190043df0968",
	                "LowerDir": "/var/lib/docker/overlay2/706ba6e7bb7bc941702d99ead04fd0680c435a61e91b26de1ff4d23a14e50e0b-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/706ba6e7bb7bc941702d99ead04fd0680c435a61e91b26de1ff4d23a14e50e0b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/706ba6e7bb7bc941702d99ead04fd0680c435a61e91b26de1ff4d23a14e50e0b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/706ba6e7bb7bc941702d99ead04fd0680c435a61e91b26de1ff4d23a14e50e0b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-728643",
	                "Source": "/var/lib/docker/volumes/functional-728643/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-728643",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-728643",
	                "name.minikube.sigs.k8s.io": "functional-728643",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9073ec2eb32fc599e380f45860ce9195140df6299eea67a019e66e59bc247e53",
	            "SandboxKey": "/var/run/docker/netns/9073ec2eb32f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-728643": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:ac:25:37:4b:ea",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ab75a2fa891068a1dcce2fc6f5961d72e872823a22afc5d3fef06c350bffd5b1",
	                    "EndpointID": "1443df92e8a3d09c7c682d92a5bcff543b1ab1a366e37da8762857abbbede550",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-728643",
	                        "7970e6365a84"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
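For reference, the port mappings buried in the inspect output above can be read back directly with a Go template; a minimal sketch, assuming the docker CLI on the test host and the container name shown in the output:

# Print the host port mapped to the apiserver port 8441/tcp (32781 in the output above).
docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-728643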
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-728643 -n functional-728643
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-728643 logs -n 25: (1.248409587s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-728643 ssh sudo umount -f /mount-9p                                                                     │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │                     │
	│ mount          │ -p functional-728643 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1823634637/001:/mount2 --alsologtostderr -v=1 │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │                     │
	│ ssh            │ functional-728643 ssh findmnt -T /mount1                                                                           │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │                     │
	│ mount          │ -p functional-728643 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1823634637/001:/mount3 --alsologtostderr -v=1 │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │                     │
	│ mount          │ -p functional-728643 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1823634637/001:/mount1 --alsologtostderr -v=1 │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │                     │
	│ ssh            │ functional-728643 ssh findmnt -T /mount1                                                                           │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │ 10 Oct 25 17:40 UTC │
	│ ssh            │ functional-728643 ssh findmnt -T /mount2                                                                           │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │ 10 Oct 25 17:40 UTC │
	│ ssh            │ functional-728643 ssh findmnt -T /mount3                                                                           │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │ 10 Oct 25 17:40 UTC │
	│ mount          │ -p functional-728643 --kill=true                                                                                   │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │                     │
	│ start          │ -p functional-728643 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │                     │
	│ update-context │ functional-728643 update-context --alsologtostderr -v=2                                                            │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │ 10 Oct 25 17:40 UTC │
	│ update-context │ functional-728643 update-context --alsologtostderr -v=2                                                            │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │ 10 Oct 25 17:40 UTC │
	│ update-context │ functional-728643 update-context --alsologtostderr -v=2                                                            │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │ 10 Oct 25 17:40 UTC │
	│ image          │ functional-728643 image ls --format short --alsologtostderr                                                        │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │ 10 Oct 25 17:40 UTC │
	│ image          │ functional-728643 image ls --format yaml --alsologtostderr                                                         │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │ 10 Oct 25 17:40 UTC │
	│ ssh            │ functional-728643 ssh pgrep buildkitd                                                                              │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │                     │
	│ image          │ functional-728643 image build -t localhost/my-image:functional-728643 testdata/build --alsologtostderr             │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │ 10 Oct 25 17:40 UTC │
	│ image          │ functional-728643 image ls --format json --alsologtostderr                                                         │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │ 10 Oct 25 17:40 UTC │
	│ image          │ functional-728643 image ls --format table --alsologtostderr                                                        │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │ 10 Oct 25 17:40 UTC │
	│ image          │ functional-728643 image ls                                                                                         │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:40 UTC │ 10 Oct 25 17:40 UTC │
	│ service        │ functional-728643 service list                                                                                     │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:49 UTC │ 10 Oct 25 17:49 UTC │
	│ service        │ functional-728643 service list -o json                                                                             │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:49 UTC │ 10 Oct 25 17:49 UTC │
	│ service        │ functional-728643 service --namespace=default --https --url hello-node                                             │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:49 UTC │                     │
	│ service        │ functional-728643 service hello-node --url --format={{.IP}}                                                        │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:49 UTC │                     │
	│ service        │ functional-728643 service hello-node --url                                                                         │ functional-728643 │ jenkins │ v1.37.0 │ 10 Oct 25 17:49 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
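	The last three service entries above never record an END TIME, matching the ServiceCmd HTTPS/Format/URL failures in this run. A hedged way to replay one of them with more logging (same binary and flags the suite already uses; the verbosity level is illustrative):
	
	# Re-run the URL lookup with logs mirrored to stderr.
	out/minikube-linux-amd64 -p functional-728643 service hello-node --url --alsologtostderr -v=3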
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 17:40:11
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 17:40:11.729673   49455 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:40:11.729909   49455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:40:11.729919   49455 out.go:374] Setting ErrFile to fd 2...
	I1010 17:40:11.729923   49455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:40:11.730245   49455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:40:11.730644   49455 out.go:368] Setting JSON to false
	I1010 17:40:11.731736   49455 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1352,"bootTime":1760116660,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 17:40:11.731850   49455 start.go:141] virtualization: kvm guest
	I1010 17:40:11.733823   49455 out.go:179] * [functional-728643] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 17:40:11.735024   49455 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 17:40:11.735030   49455 notify.go:220] Checking for updates...
	I1010 17:40:11.737597   49455 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 17:40:11.738806   49455 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 17:40:11.739930   49455 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 17:40:11.740924   49455 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 17:40:11.741886   49455 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 17:40:11.743192   49455 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:40:11.743695   49455 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 17:40:11.765909   49455 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 17:40:11.766009   49455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 17:40:11.819932   49455 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-10 17:40:11.809766984 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 17:40:11.820035   49455 docker.go:318] overlay module found
	I1010 17:40:11.822639   49455 out.go:179] * Using the docker driver based on existing profile
	I1010 17:40:11.823816   49455 start.go:305] selected driver: docker
	I1010 17:40:11.823831   49455 start.go:925] validating driver "docker" against &{Name:functional-728643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-728643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 17:40:11.823933   49455 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 17:40:11.825758   49455 out.go:203] 
	W1010 17:40:11.827082   49455 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1010 17:40:11.828128   49455 out.go:203] 
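	The dry-run start above is rejected by minikube's memory validation: the requested 250MB sits below the 1800MB usable minimum, hence RSRC_INSUFFICIENT_REQ_MEMORY. A minimal sketch of the same invocation with a value that clears the check (profile and driver taken from the failing command; the 2048MB figure is illustrative):
	
	# Same dry-run request, but above the 1800MB minimum.
	out/minikube-linux-amd64 start -p functional-728643 --dry-run --memory 2048MB --alsologtostderr --driver=docker --container-runtime=crio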
	
	
	==> CRI-O <==
	Oct 10 17:40:07 functional-728643 crio[3590]: time="2025-10-10T17:40:07.060224937Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 17:40:07 functional-728643 crio[3590]: time="2025-10-10T17:40:07.060454963Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1be521751dc187d85095ffa7d5e25d805ec77c331afc88f9daa0fe1f7e70e519/merged/etc/group: no such file or directory"
	Oct 10 17:40:07 functional-728643 crio[3590]: time="2025-10-10T17:40:07.060874589Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 17:40:07 functional-728643 crio[3590]: time="2025-10-10T17:40:07.087214869Z" level=info msg="Created container c6b69b252ea600959a85f367757efe7ca63dae107a6a05df8c9fa8d4adb8f7d5: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7jdcf/dashboard-metrics-scraper" id=0638efd2-bed3-4a39-986d-9f43092768ec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 17:40:07 functional-728643 crio[3590]: time="2025-10-10T17:40:07.087847502Z" level=info msg="Starting container: c6b69b252ea600959a85f367757efe7ca63dae107a6a05df8c9fa8d4adb8f7d5" id=2f6a0435-785f-4f5a-ac99-5aaa243c080f name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 17:40:07 functional-728643 crio[3590]: time="2025-10-10T17:40:07.089532015Z" level=info msg="Started container" PID=7375 containerID=c6b69b252ea600959a85f367757efe7ca63dae107a6a05df8c9fa8d4adb8f7d5 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7jdcf/dashboard-metrics-scraper id=2f6a0435-785f-4f5a-ac99-5aaa243c080f name=/runtime.v1.RuntimeService/StartContainer sandboxID=e7855223f8398b585a7281de76f4f7978d2ebf2557dceb806c9c0098769e56e1
	Oct 10 17:40:10 functional-728643 crio[3590]: time="2025-10-10T17:40:10.969447243Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=7ccad21f-6118-488f-9a5e-c736e38c3afe name=/runtime.v1.ImageService/PullImage
	Oct 10 17:40:10 functional-728643 crio[3590]: time="2025-10-10T17:40:10.970137051Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=2ff9405a-4784-4f7f-be6e-edcf8832a984 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 17:40:10 functional-728643 crio[3590]: time="2025-10-10T17:40:10.971721595Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=4351cce4-2b85-4fda-92a8-0b348f4790d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 17:40:10 functional-728643 crio[3590]: time="2025-10-10T17:40:10.975760287Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5nqc7/kubernetes-dashboard" id=33e62206-1a9b-4947-9e7b-b9c5fd409c03 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 17:40:10 functional-728643 crio[3590]: time="2025-10-10T17:40:10.976575677Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 17:40:10 functional-728643 crio[3590]: time="2025-10-10T17:40:10.980695577Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 17:40:10 functional-728643 crio[3590]: time="2025-10-10T17:40:10.980893643Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9df1ceadab25a339adb251e3facf79b18749649d38f7157655f019f3fd27cffe/merged/etc/group: no such file or directory"
	Oct 10 17:40:10 functional-728643 crio[3590]: time="2025-10-10T17:40:10.98136857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 17:40:11 functional-728643 crio[3590]: time="2025-10-10T17:40:11.015866065Z" level=info msg="Created container 0ae7075d2366689b07ce1dfb5910c7950a150cae652abbd6bce9fad52c367206: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5nqc7/kubernetes-dashboard" id=33e62206-1a9b-4947-9e7b-b9c5fd409c03 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 17:40:11 functional-728643 crio[3590]: time="2025-10-10T17:40:11.016538565Z" level=info msg="Starting container: 0ae7075d2366689b07ce1dfb5910c7950a150cae652abbd6bce9fad52c367206" id=64300538-7575-42b0-9898-c81c26f9f003 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 17:40:11 functional-728643 crio[3590]: time="2025-10-10T17:40:11.018796542Z" level=info msg="Started container" PID=7590 containerID=0ae7075d2366689b07ce1dfb5910c7950a150cae652abbd6bce9fad52c367206 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5nqc7/kubernetes-dashboard id=64300538-7575-42b0-9898-c81c26f9f003 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0a8b43f94f61fde317557405e2a22ab5c1327dd1b423823e26ed1e476fd5bfc4
	Oct 10 17:40:15 functional-728643 crio[3590]: time="2025-10-10T17:40:15.912080071Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=886ccadf-b740-4042-8077-d001e32f8b77 name=/runtime.v1.ImageService/PullImage
	Oct 10 17:40:24 functional-728643 crio[3590]: time="2025-10-10T17:40:24.911635807Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d0734ff3-4d4f-4350-8601-b1c44a3c301b name=/runtime.v1.ImageService/PullImage
	Oct 10 17:41:03 functional-728643 crio[3590]: time="2025-10-10T17:41:03.911229643Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=27652dc3-acb3-4b4a-8760-01eb25e42f59 name=/runtime.v1.ImageService/PullImage
	Oct 10 17:41:13 functional-728643 crio[3590]: time="2025-10-10T17:41:13.911805399Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3c2b93dd-b8d5-4424-8a64-cfe1e868d7fd name=/runtime.v1.ImageService/PullImage
	Oct 10 17:42:33 functional-728643 crio[3590]: time="2025-10-10T17:42:33.911077916Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3dfefc2f-fc6d-45ef-9f67-1a66fb8835d4 name=/runtime.v1.ImageService/PullImage
	Oct 10 17:42:43 functional-728643 crio[3590]: time="2025-10-10T17:42:43.911681845Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e95d737f-e21f-4f0e-903e-51450b7e3f96 name=/runtime.v1.ImageService/PullImage
	Oct 10 17:45:17 functional-728643 crio[3590]: time="2025-10-10T17:45:17.911550309Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=91f996bf-5be8-45e4-a5b6-bcd1b71bc407 name=/runtime.v1.ImageService/PullImage
	Oct 10 17:45:27 functional-728643 crio[3590]: time="2025-10-10T17:45:27.911545026Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d1f9d9f8-4d11-403a-a4fa-8890c05190b0 name=/runtime.v1.ImageService/PullImage
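	The log ends with kicbase/echo-server:latest being re-pulled every few minutes with no matching "Pulled image" entry, which fits the hello-node pods never serving. A hedged spot-check from inside the node (this assumes crictl is present in the node image, as it normally is for crio-based minikube nodes):
	
	# List images already on the node, then retry the stuck pull by hand.
	out/minikube-linux-amd64 -p functional-728643 ssh -- sudo crictl images
	out/minikube-linux-amd64 -p functional-728643 ssh -- sudo crictl pull kicbase/echo-server:latest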
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0ae7075d23666       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   0a8b43f94f61f       kubernetes-dashboard-855c9754f9-5nqc7        kubernetes-dashboard
	c6b69b252ea60       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   e7855223f8398       dashboard-metrics-scraper-77bf4d6c4c-7jdcf   kubernetes-dashboard
	d76346e272817       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   9997ee35271b0       busybox-mount                                default
	20a9658b4987a       docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115                  9 minutes ago       Running             myfrontend                  0                   caca403fd2eba       sp-pod                                       default
	2c47978b1ba83       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   87389d2c3adcb       mysql-5bb876957f-xb8cr                       default
	2f7cc8dd4e30a       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  10 minutes ago      Running             nginx                       0                   124e9965e549a       nginx-svc                                    default
	6efee8c4fe5d2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Running             kube-controller-manager     3                   941d6de324f1e       kube-controller-manager-functional-728643    kube-system
	8f171d8c00997       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 11 minutes ago      Running             kube-apiserver              2                   bb0f2b2ffccb0       kube-apiserver-functional-728643             kube-system
	4ebf54137dd6f       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Exited              kube-controller-manager     2                   941d6de324f1e       kube-controller-manager-functional-728643    kube-system
	a6ceb5ab68665       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 11 minutes ago      Exited              kube-apiserver              1                   bb0f2b2ffccb0       kube-apiserver-functional-728643             kube-system
	cf6510a25114e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Running             kube-scheduler              1                   754377bc33940       kube-scheduler-functional-728643             kube-system
	41b7c7a1c79ea       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Running             etcd                        1                   f220225fbbc18       etcd-functional-728643                       kube-system
	ce76aaed6cb86       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 12 minutes ago      Running             storage-provisioner         1                   3bba37f0369f3       storage-provisioner                          kube-system
	24ad8b6e798aa       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 12 minutes ago      Running             coredns                     1                   c45f8f62394f9       coredns-66bc5c9577-dcrz2                     kube-system
	5366f50518556       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Running             kindnet-cni                 1                   c747090f7cb4c       kindnet-68f9d                                kube-system
	b5a2b924b7f64       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 12 minutes ago      Running             kube-proxy                  1                   ebfc36efe9337       kube-proxy-24f7q                             kube-system
	0f9d490a0f3e8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 12 minutes ago      Exited              coredns                     0                   c45f8f62394f9       coredns-66bc5c9577-dcrz2                     kube-system
	14aaacdce3ba1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 12 minutes ago      Exited              storage-provisioner         0                   3bba37f0369f3       storage-provisioner                          kube-system
	ea2f0a99159b0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 12 minutes ago      Exited              kube-proxy                  0                   ebfc36efe9337       kube-proxy-24f7q                             kube-system
	2aa057576d8ac       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Exited              kindnet-cni                 0                   c747090f7cb4c       kindnet-68f9d                                kube-system
	473a6e69b5e85       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   f220225fbbc18       etcd-functional-728643                       kube-system
	d3523b9a3a786       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 12 minutes ago      Exited              kube-scheduler              0                   754377bc33940       kube-scheduler-functional-728643             kube-system
	
	
	==> coredns [0f9d490a0f3e8ded9ea421653be01d3be7ff3d767a67f1aa48f1f36b6c2803a7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46480 - 41493 "HINFO IN 4959159980072817136.4447913344823953293. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.047106115s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [24ad8b6e798aabdc5d258504cf3fdb777da2a72c3f737b849239400596d33cf8] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49174 - 20880 "HINFO IN 6985471402424246899.5459241500264366093. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031781115s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=474": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?resourceVersion=474": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
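	The two failed namespace lists above hit 10.96.0.1:443 while the first kube-apiserver container was down; its replacement comes up shortly after (see the container status above). A hedged spot-check that the in-cluster Service is backed again (this assumes the default kubeconfig context named after the profile):
	
	# Confirm the kubernetes Service has a live apiserver endpoint.
	kubectl --context functional-728643 get endpoints kubernetes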
	
	
	==> describe nodes <==
	Name:               functional-728643
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-728643
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=functional-728643
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T17_36_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 17:36:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-728643
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 17:49:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 17:49:40 +0000   Fri, 10 Oct 2025 17:36:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 17:49:40 +0000   Fri, 10 Oct 2025 17:36:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 17:49:40 +0000   Fri, 10 Oct 2025 17:36:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 17:49:40 +0000   Fri, 10 Oct 2025 17:37:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-728643
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                57bd41b1-a52b-4931-b3fd-3ebf96fab91a
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-p8sts                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-nlq8q           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-xb8cr                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 coredns-66bc5c9577-dcrz2                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-728643                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-68f9d                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-728643              250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-728643     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-24f7q                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-728643              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-7jdcf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5nqc7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 12m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-728643 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-728643 status is now: NodeHasSufficientMemory
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-728643 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-728643 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-728643 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-728643 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-728643 event: Registered Node functional-728643 in Controller
	  Normal  NodeReady                12m                kubelet          Node functional-728643 status is now: NodeReady
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-728643 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-728643 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-728643 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node functional-728643 event: Registered Node functional-728643 in Controller
	
	
	==> dmesg <==
	[  +0.077121] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021628] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.602398] kauditd_printk_skb: 47 callbacks suppressed
	[Oct10 17:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.057549] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.023904] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.023945] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.024888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.022912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +2.047862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +4.031726] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +8.191358] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[ +16.382802] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[Oct10 17:34] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	
	
	==> etcd [41b7c7a1c79ea9d3a4b88446f346be88e4ef024fb445be9d32095a128bb80cc4] <==
	{"level":"warn","ts":"2025-10-10T17:38:23.072087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.077909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.083737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.089818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.095877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.102665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.108857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.115082Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.121761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.127874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.133891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.140146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.146558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.152522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.158662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.164447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.170656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.176783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.188321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.194811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.200951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:38:23.247102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55430","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-10T17:48:22.804728Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1131}
	{"level":"info","ts":"2025-10-10T17:48:22.823814Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1131,"took":"18.733935ms","hash":2539497060,"current-db-size-bytes":3358720,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1540096,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-10T17:48:22.823866Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2539497060,"revision":1131,"compact-revision":-1}
	
	
	==> etcd [473a6e69b5e85fb5db9ddc15c0a6b6ae5a626fe3d0a069f319bbc9603cae7c62] <==
	{"level":"warn","ts":"2025-10-10T17:36:51.587565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:36:51.594905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:36:51.601818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:36:51.613128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:36:51.619164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:36:51.626888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T17:36:51.675315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54624","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-10T17:37:37.188454Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-10T17:37:37.188536Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-728643","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-10T17:37:37.188626Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-10T17:37:44.190032Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-10T17:37:44.190182Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-10T17:37:44.190212Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-10T17:37:44.190338Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-10T17:37:44.190356Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-10T17:37:44.190752Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-10T17:37:44.190835Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-10T17:37:44.190844Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-10T17:37:44.191213Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-10T17:37:44.191275Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-10T17:37:44.191290Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-10T17:37:44.192925Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-10T17:37:44.192990Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-10T17:37:44.193022Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-10T17:37:44.193034Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-728643","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 17:49:46 up 32 min,  0 user,  load average: 0.01, 0.14, 0.27
	Linux functional-728643 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2aa057576d8ac7d306d1bfcbd5a8372a0261f48598fb5ef040d3a2da10958d3b] <==
	I1010 17:37:01.057542       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 17:37:01.057835       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1010 17:37:01.057977       1 main.go:148] setting mtu 1500 for CNI 
	I1010 17:37:01.057992       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 17:37:01.058019       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T17:37:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 17:37:01.170930       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 17:37:01.170952       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 17:37:01.170972       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 17:37:01.171144       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 17:37:01.556454       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 17:37:01.556548       1 metrics.go:72] Registering metrics
	I1010 17:37:01.556620       1 controller.go:711] "Syncing nftables rules"
	I1010 17:37:11.171218       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:37:11.171299       1 main.go:301] handling current node
	I1010 17:37:21.178154       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:37:21.178182       1 main.go:301] handling current node
	I1010 17:37:31.175174       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:37:31.175211       1 main.go:301] handling current node
	
	
	==> kindnet [5366f50518556b3b90b37a75548c6ed4aaddb04f1805a600940fc3f3a71d03cf] <==
	I1010 17:47:37.934275       1 main.go:301] handling current node
	I1010 17:47:47.937689       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:47:47.937726       1 main.go:301] handling current node
	I1010 17:47:57.930197       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:47:57.930243       1 main.go:301] handling current node
	I1010 17:48:07.929689       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:48:07.929739       1 main.go:301] handling current node
	I1010 17:48:17.938134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:48:17.938176       1 main.go:301] handling current node
	I1010 17:48:27.931606       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:48:27.931643       1 main.go:301] handling current node
	I1010 17:48:37.934916       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:48:37.934962       1 main.go:301] handling current node
	I1010 17:48:47.935227       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:48:47.935264       1 main.go:301] handling current node
	I1010 17:48:57.931799       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:48:57.931839       1 main.go:301] handling current node
	I1010 17:49:07.939082       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:49:07.939114       1 main.go:301] handling current node
	I1010 17:49:17.938450       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:49:17.938483       1 main.go:301] handling current node
	I1010 17:49:27.929577       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:49:27.929609       1 main.go:301] handling current node
	I1010 17:49:37.934719       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1010 17:49:37.934756       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8f171d8c00997e3fa609f7313e948d2a1584ea7c2c5999eecd116378ba484e1e] <==
	I1010 17:38:23.721197       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 17:38:23.721587       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1010 17:38:23.744623       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 17:38:24.619403       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1010 17:38:24.827137       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1010 17:38:24.828383       1 controller.go:667] quota admission added evaluator for: endpoints
	I1010 17:38:24.832838       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1010 17:38:43.051821       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1010 17:38:44.530327       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1010 17:38:59.225366       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.100.120.191"}
	I1010 17:39:03.091775       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1010 17:39:03.186657       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.179.63"}
	I1010 17:39:03.987817       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.27.167"}
	I1010 17:39:05.722211       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.155.133"}
	I1010 17:39:44.423545       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.254.19"}
	E1010 17:39:50.843861       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54598: use of closed network connection
	E1010 17:39:52.396072       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54612: use of closed network connection
	E1010 17:39:55.240626       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56446: use of closed network connection
	E1010 17:40:03.492610       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:56588: use of closed network connection
	I1010 17:40:04.697275       1 controller.go:667] quota admission added evaluator for: namespaces
	I1010 17:40:04.745291       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 17:40:04.755672       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 17:40:04.838214       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.187.98"}
	I1010 17:40:04.849628       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.4.135"}
	I1010 17:48:23.649579       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-apiserver [a6ceb5ab68665676a57f320b85b5838f0df7109e39d7e15434f66a2f806cfdb0] <==
	I1010 17:37:58.068513       1 options.go:263] external host was not specified, using 192.168.49.2
	I1010 17:37:58.071312       1 server.go:150] Version: v1.34.1
	I1010 17:37:58.071342       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1010 17:37:58.071607       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
	
	
	==> kube-controller-manager [4ebf54137dd6f78d48c66baca102d7be6b82afd894605d14c3d0733ad1ecc59f] <==
	I1010 17:38:09.928981       1 serving.go:386] Generated self-signed cert in-memory
	I1010 17:38:10.678853       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1010 17:38:10.678876       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 17:38:10.680109       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1010 17:38:10.680113       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1010 17:38:10.680433       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1010 17:38:10.680463       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1010 17:38:20.681776       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [6efee8c4fe5d2e2c37d6b418b44002061a2bb862ff8d3a9157286980709b7e4f] <==
	I1010 17:38:44.426123       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1010 17:38:44.426285       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1010 17:38:44.427410       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1010 17:38:44.427432       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1010 17:38:44.429996       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1010 17:38:44.431108       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1010 17:38:44.431138       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1010 17:38:44.431168       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1010 17:38:44.431169       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 17:38:44.431231       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1010 17:38:44.431242       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1010 17:38:44.431249       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1010 17:38:44.433381       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1010 17:38:44.435520       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1010 17:38:44.437770       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1010 17:38:44.448096       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 17:38:44.448111       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1010 17:38:44.448121       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1010 17:38:44.448606       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1010 17:40:04.746977       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1010 17:40:04.751592       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1010 17:40:04.753163       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1010 17:40:04.755548       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1010 17:40:04.757625       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1010 17:40:04.762241       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [b5a2b924b7f64374bba93b95ac85f4415bb2661169541cfa4716bc732e679fd7] <==
	I1010 17:37:37.695696       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 17:37:37.796369       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 17:37:37.796414       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1010 17:37:37.796546       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 17:37:37.815544       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 17:37:37.815617       1 server_linux.go:132] "Using iptables Proxier"
	I1010 17:37:37.820772       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 17:37:37.821146       1 server.go:527] "Version info" version="v1.34.1"
	I1010 17:37:37.821185       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 17:37:37.822558       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 17:37:37.822575       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 17:37:37.822594       1 config.go:200] "Starting service config controller"
	I1010 17:37:37.822601       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 17:37:37.822625       1 config.go:106] "Starting endpoint slice config controller"
	I1010 17:37:37.822630       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 17:37:37.822649       1 config.go:309] "Starting node config controller"
	I1010 17:37:37.822660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 17:37:37.822666       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 17:37:37.923217       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 17:37:37.923226       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1010 17:37:37.923243       1 shared_informer.go:356] "Caches are synced" controller="service config"
	E1010 17:37:55.882883       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1010 17:37:55.882876       1 reflector.go:205] "Failed to watch" err="nodes \"functional-728643\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1010 17:37:55.882876       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1010 17:37:55.882884       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	
	
	==> kube-proxy [ea2f0a99159b01313e0439bbd6c4417cbb02fb97a21f48f85f2fbc15ae9c7595] <==
	I1010 17:37:00.813635       1 server_linux.go:53] "Using iptables proxy"
	I1010 17:37:00.870775       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 17:37:00.971891       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 17:37:00.971926       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1010 17:37:00.972022       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 17:37:00.990312       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 17:37:00.990378       1 server_linux.go:132] "Using iptables Proxier"
	I1010 17:37:00.995640       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 17:37:00.995955       1 server.go:527] "Version info" version="v1.34.1"
	I1010 17:37:00.995994       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 17:37:00.997217       1 config.go:309] "Starting node config controller"
	I1010 17:37:00.997244       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 17:37:00.997254       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 17:37:00.997311       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 17:37:00.997318       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 17:37:00.997362       1 config.go:200] "Starting service config controller"
	I1010 17:37:00.997379       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 17:37:00.997410       1 config.go:106] "Starting endpoint slice config controller"
	I1010 17:37:00.997417       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 17:37:01.098139       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1010 17:37:01.098219       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 17:37:01.098229       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [cf6510a25114e5897de858f3675dcdcfb9305d55ba0368883e739d50cecc63b0] <==
	I1010 17:37:58.131360       1 serving.go:386] Generated self-signed cert in-memory
	I1010 17:37:58.638943       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1010 17:37:58.638965       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 17:37:58.643378       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1010 17:37:58.643393       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 17:37:58.643405       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1010 17:37:58.643417       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 17:37:58.643427       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1010 17:37:58.643415       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1010 17:37:58.643838       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1010 17:37:58.644406       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1010 17:37:58.743648       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1010 17:37:58.743683       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 17:37:58.743649       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	E1010 17:38:23.624875       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1010 17:38:23.633516       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1010 17:38:23.633559       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1010 17:39:11.148223       1 schedule_one.go:191] "Status after running PostFilter plugins for pod" logger="UnhandledError" pod="default/sp-pod" status="not found"
	E1010 17:39:15.053415       1 schedule_one.go:191] "Status after running PostFilter plugins for pod" logger="UnhandledError" pod="default/sp-pod" status="not found"
	
	
	==> kube-scheduler [d3523b9a3a7862d9c5b71cb4d05c91ac362821d6159dd25a13ee2818dd795c26] <==
	E1010 17:36:52.310801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1010 17:36:52.310902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1010 17:36:52.311019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1010 17:36:52.311165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1010 17:36:52.311212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1010 17:36:52.311247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1010 17:36:52.311252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1010 17:36:52.311300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1010 17:36:52.311327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1010 17:36:52.311339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1010 17:36:52.311372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1010 17:36:52.311469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1010 17:36:52.311479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1010 17:36:52.311547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1010 17:36:53.141454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1010 17:36:53.142318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1010 17:36:53.146614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1010 17:36:53.206788       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1010 17:36:53.909039       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 17:37:54.918845       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1010 17:37:54.918877       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 17:37:54.918911       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1010 17:37:54.918935       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1010 17:37:54.918962       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1010 17:37:54.918981       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 10 17:47:12 functional-728643 kubelet[4293]: E1010 17:47:12.911078    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nlq8q" podUID="076b7f61-418e-4e38-ab3b-c3b50ec7e9d1"
	Oct 10 17:47:14 functional-728643 kubelet[4293]: E1010 17:47:14.911448    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8sts" podUID="bbee8056-6952-4759-bfef-d9259d8c29ba"
	Oct 10 17:47:23 functional-728643 kubelet[4293]: E1010 17:47:23.910941    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nlq8q" podUID="076b7f61-418e-4e38-ab3b-c3b50ec7e9d1"
	Oct 10 17:47:28 functional-728643 kubelet[4293]: E1010 17:47:28.911221    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8sts" podUID="bbee8056-6952-4759-bfef-d9259d8c29ba"
	Oct 10 17:47:37 functional-728643 kubelet[4293]: E1010 17:47:37.911094    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nlq8q" podUID="076b7f61-418e-4e38-ab3b-c3b50ec7e9d1"
	Oct 10 17:47:42 functional-728643 kubelet[4293]: E1010 17:47:42.911134    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8sts" podUID="bbee8056-6952-4759-bfef-d9259d8c29ba"
	Oct 10 17:47:51 functional-728643 kubelet[4293]: E1010 17:47:51.910550    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nlq8q" podUID="076b7f61-418e-4e38-ab3b-c3b50ec7e9d1"
	Oct 10 17:47:53 functional-728643 kubelet[4293]: E1010 17:47:53.911301    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8sts" podUID="bbee8056-6952-4759-bfef-d9259d8c29ba"
	Oct 10 17:48:04 functional-728643 kubelet[4293]: E1010 17:48:04.911406    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nlq8q" podUID="076b7f61-418e-4e38-ab3b-c3b50ec7e9d1"
	Oct 10 17:48:08 functional-728643 kubelet[4293]: E1010 17:48:08.912787    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8sts" podUID="bbee8056-6952-4759-bfef-d9259d8c29ba"
	Oct 10 17:48:15 functional-728643 kubelet[4293]: E1010 17:48:15.910896    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nlq8q" podUID="076b7f61-418e-4e38-ab3b-c3b50ec7e9d1"
	Oct 10 17:48:22 functional-728643 kubelet[4293]: E1010 17:48:22.911091    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8sts" podUID="bbee8056-6952-4759-bfef-d9259d8c29ba"
	Oct 10 17:48:28 functional-728643 kubelet[4293]: E1010 17:48:28.910838    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nlq8q" podUID="076b7f61-418e-4e38-ab3b-c3b50ec7e9d1"
	Oct 10 17:48:34 functional-728643 kubelet[4293]: E1010 17:48:34.911363    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8sts" podUID="bbee8056-6952-4759-bfef-d9259d8c29ba"
	Oct 10 17:48:43 functional-728643 kubelet[4293]: E1010 17:48:43.910966    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nlq8q" podUID="076b7f61-418e-4e38-ab3b-c3b50ec7e9d1"
	Oct 10 17:48:45 functional-728643 kubelet[4293]: E1010 17:48:45.910695    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8sts" podUID="bbee8056-6952-4759-bfef-d9259d8c29ba"
	Oct 10 17:48:56 functional-728643 kubelet[4293]: E1010 17:48:56.911039    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nlq8q" podUID="076b7f61-418e-4e38-ab3b-c3b50ec7e9d1"
	Oct 10 17:48:58 functional-728643 kubelet[4293]: E1010 17:48:58.910658    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8sts" podUID="bbee8056-6952-4759-bfef-d9259d8c29ba"
	Oct 10 17:49:08 functional-728643 kubelet[4293]: E1010 17:49:08.911598    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nlq8q" podUID="076b7f61-418e-4e38-ab3b-c3b50ec7e9d1"
	Oct 10 17:49:12 functional-728643 kubelet[4293]: E1010 17:49:12.912882    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8sts" podUID="bbee8056-6952-4759-bfef-d9259d8c29ba"
	Oct 10 17:49:19 functional-728643 kubelet[4293]: E1010 17:49:19.910916    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nlq8q" podUID="076b7f61-418e-4e38-ab3b-c3b50ec7e9d1"
	Oct 10 17:49:24 functional-728643 kubelet[4293]: E1010 17:49:24.910848    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8sts" podUID="bbee8056-6952-4759-bfef-d9259d8c29ba"
	Oct 10 17:49:30 functional-728643 kubelet[4293]: E1010 17:49:30.911439    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nlq8q" podUID="076b7f61-418e-4e38-ab3b-c3b50ec7e9d1"
	Oct 10 17:49:38 functional-728643 kubelet[4293]: E1010 17:49:38.910891    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8sts" podUID="bbee8056-6952-4759-bfef-d9259d8c29ba"
	Oct 10 17:49:44 functional-728643 kubelet[4293]: E1010 17:49:44.913480    4293 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-nlq8q" podUID="076b7f61-418e-4e38-ab3b-c3b50ec7e9d1"
	
	
	==> kubernetes-dashboard [0ae7075d2366689b07ce1dfb5910c7950a150cae652abbd6bce9fad52c367206] <==
	2025/10/10 17:40:11 Starting overwatch
	2025/10/10 17:40:11 Using namespace: kubernetes-dashboard
	2025/10/10 17:40:11 Using in-cluster config to connect to apiserver
	2025/10/10 17:40:11 Using secret token for csrf signing
	2025/10/10 17:40:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/10 17:40:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/10 17:40:11 Successful initial request to the apiserver, version: v1.34.1
	2025/10/10 17:40:11 Generating JWE encryption key
	2025/10/10 17:40:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/10 17:40:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/10 17:40:11 Initializing JWE encryption key from synchronized object
	2025/10/10 17:40:11 Creating in-cluster Sidecar client
	2025/10/10 17:40:11 Serving insecurely on HTTP port: 9090
	2025/10/10 17:40:11 Successful request to sidecar
	
	
	==> storage-provisioner [14aaacdce3ba16ead7f13eaf5182d3e843487fb9e6b6f5e675295645b75adeb9] <==
	I1010 17:37:11.787689       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-728643_3c13683d-a972-489c-ba80-eea9281e804f!
	W1010 17:37:13.694720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:13.698323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:15.701646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:15.705402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:17.708785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:17.712629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:19.715355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:19.720083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:21.724183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:21.728157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:23.735420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:23.750150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:25.754253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:25.760017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:27.762915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:27.766834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:29.770525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:29.774329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:31.777419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:31.782380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:33.785349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:33.789914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:35.792741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:37:35.796523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ce76aaed6cb86c4cb0677745c30cd09bff478fd37cd729a4e341b155d4f43aa4] <==
	W1010 17:49:22.233974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:24.237285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:24.242297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:26.245096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:26.249110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:28.252112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:28.256839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:30.259581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:30.263275       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:32.266249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:32.269904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:34.273210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:34.277805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:36.280779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:36.284370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:38.287795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:38.292470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:40.295585       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:40.299139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:42.304245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:42.307856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:44.311230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:44.315750       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:46.318904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 17:49:46.324143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
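The kubelet loop in the log above and the pod events in the describe output below point at a single root cause: CRI-O is resolving short image names with short-name mode set to enforcing, so the unqualified reference kicbase/echo-server matches more than one unqualified-search registry and every pull is rejected with "returns ambiguous list". A quick way to confirm this on the node is to retry the pull with and without a fully qualified name (a sketch, assuming crictl is present in the node image and that docker.io/kicbase/echo-server:1.0 is a published tag; neither is shown in this report):

	minikube -p functional-728643 ssh -- sudo crictl pull kicbase/echo-server
	minikube -p functional-728643 ssh -- sudo crictl pull docker.io/kicbase/echo-server:1.0

The first command should reproduce the "short name mode is enforcing" error; the second names the registry explicitly and bypasses short-name resolution entirely.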
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-728643 -n functional-728643
helpers_test.go:269: (dbg) Run:  kubectl --context functional-728643 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-p8sts hello-node-connect-7d85dfc575-nlq8q
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-728643 describe pod busybox-mount hello-node-75c85bcc94-p8sts hello-node-connect-7d85dfc575-nlq8q
helpers_test.go:290: (dbg) kubectl --context functional-728643 describe pod busybox-mount hello-node-75c85bcc94-p8sts hello-node-connect-7d85dfc575-nlq8q:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-728643/192.168.49.2
	Start Time:       Fri, 10 Oct 2025 17:40:02 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://d76346e272817a7128aa29fbbbabb0616f17e9c0c23d71df4acd234ae6c09563
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 10 Oct 2025 17:40:05 +0000
	      Finished:     Fri, 10 Oct 2025 17:40:05 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bz8cr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-bz8cr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m44s  default-scheduler  Successfully assigned default/busybox-mount to functional-728643
	  Normal  Pulling    9m44s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m42s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.061s (2.061s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m42s  kubelet            Created container: mount-munger
	  Normal  Started    9m42s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-p8sts
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-728643/192.168.49.2
	Start Time:       Fri, 10 Oct 2025 17:39:34 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qr2b2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-qr2b2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-p8sts to functional-728643
	  Normal   Pulling    7m14s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m14s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m14s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    9s (x43 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     9s (x43 over 10m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-nlq8q
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-728643/192.168.49.2
	Start Time:       Fri, 10 Oct 2025 17:39:44 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kz98q (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kz98q:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-nlq8q to functional-728643
	  Normal   Pulling    7m4s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m4s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m4s (x5 over 10m)      kubelet            Error: ErrImagePull
	  Warning  Failed     4m56s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3s (x43 over 9m59s)     kubelet            Back-off pulling image "kicbase/echo-server"

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.81s)
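
Root cause for this and the other echo-server failures below: with CRI-O, short-name resolution follows containers-registries.conf(5), and in enforcing mode an unqualified name such as kicbase/echo-server is rejected whenever more than one registry is listed under unqualified-search-registries, since a non-interactive pull cannot disambiguate between them. A minimal sketch for confirming the policy on the node and pulling a fully qualified reference instead (the config path is the containers-project default and the :1.0 tag is an assumption, not taken from this run):

	# Show the short-name policy CRI-O consults
	grep -E 'short-name-mode|unqualified-search-registries' /etc/containers/registries.conf
	# A fully qualified reference is never ambiguous, so it pulls under any policy
	sudo crictl pull docker.io/kicbase/echo-server:1.0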

TestFunctional/parallel/ServiceCmd/DeployApp (600.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-728643 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-728643 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-p8sts" [bbee8056-6952-4759-bfef-d9259d8c29ba] Pending
helpers_test.go:352: "hello-node-75c85bcc94-p8sts" [bbee8056-6952-4759-bfef-d9259d8c29ba] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-728643 -n functional-728643
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-10 17:49:03.53743975 +0000 UTC m=+1194.719746084
functional_test.go:1460: (dbg) Run:  kubectl --context functional-728643 describe po hello-node-75c85bcc94-p8sts -n default
functional_test.go:1460: (dbg) kubectl --context functional-728643 describe po hello-node-75c85bcc94-p8sts -n default:
Name:             hello-node-75c85bcc94-p8sts
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-728643/192.168.49.2
Start Time:       Fri, 10 Oct 2025 17:39:34 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qr2b2 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-qr2b2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m57s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-p8sts to functional-728643
  Normal   Pulling    6m30s (x5 over 9m29s)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m30s (x5 over 9m29s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m30s (x5 over 9m29s)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m21s (x21 over 9m28s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m21s (x21 over 9m28s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-728643 logs hello-node-75c85bcc94-p8sts -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-728643 logs hello-node-75c85bcc94-p8sts -n default: exit status 1 (73.606731ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-p8sts" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-728643 logs hello-node-75c85bcc94-p8sts -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.66s)
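
The deployment is created with the same unqualified image name (functional_test.go:1451), so the pod hits the enforcing short-name rejection described above and never becomes Ready; the ServiceCmd/HTTPS, Format, and URL subtests further down then fail with SVC_UNREACHABLE because no running pod backs the hello-node service. A sketch of the same deployment with a fully qualified reference, which would sidestep the policy (the :1.0 tag is an assumption):

	kubectl --context functional-728643 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:1.0
	kubectl --context functional-728643 expose deployment hello-node --type=NodePort --port=8080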

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image load --daemon kicbase/echo-server:functional-728643 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-728643" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.85s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image load --daemon kicbase/echo-server:functional-728643 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-728643" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-728643
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image load --daemon kicbase/echo-server:functional-728643 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-728643" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image save kicbase/echo-server:functional-728643 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1010 17:39:58.686430   45702 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:39:58.686687   45702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:39:58.686696   45702 out.go:374] Setting ErrFile to fd 2...
	I1010 17:39:58.686700   45702 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:39:58.686896   45702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:39:58.687476   45702 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:39:58.687562   45702 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:39:58.687902   45702 cli_runner.go:164] Run: docker container inspect functional-728643 --format={{.State.Status}}
	I1010 17:39:58.704844   45702 ssh_runner.go:195] Run: systemctl --version
	I1010 17:39:58.704888   45702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728643
	I1010 17:39:58.722517   45702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/functional-728643/id_rsa Username:docker}
	I1010 17:39:58.817737   45702 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1010 17:39:58.817792   45702 cache_images.go:254] Failed to load cached images for "functional-728643": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1010 17:39:58.817813   45702 cache_images.go:266] failed pushing to: functional-728643

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)
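
This failure is a cascade: ImageSaveToFile above exited without writing echo-server-save.tar, so the load here stats a file that was never created ("no such file or directory"). For reference, the intended round trip uses the two subcommands already shown in these logs (the /tmp path is an assumed stand-in for the workspace path):

	out/minikube-linux-amd64 -p functional-728643 image save kicbase/echo-server:functional-728643 /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-728643 image load /tmp/echo-server.tar
	out/minikube-linux-amd64 -p functional-728643 image ls   # the functional-728643 tag should then be listed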

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-728643
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image save --daemon kicbase/echo-server:functional-728643 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-728643
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-728643: exit status 1 (16.688245ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-728643

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-728643

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)
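
Another cascade: the functional-728643 tag was never successfully loaded into the cluster (see ImageLoadDaemon above) and docker rmi removes the host-side copy first, so image save --daemon has nothing to export and the follow-up docker image inspect finds no image. The localhost/ prefix in the inspected name reflects how CRI-O qualifies registry-less tags; what the runtime actually holds can be checked with the subcommand used throughout this report:

	out/minikube-linux-amd64 -p functional-728643 image ls   # localhost/kicbase/echo-server:functional-728643 would appear here once loaded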

TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728643 service --namespace=default --https --url hello-node: exit status 115 (535.736583ms)

-- stdout --
	https://192.168.49.2:30205
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-728643 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

TestFunctional/parallel/ServiceCmd/Format (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728643 service hello-node --url --format={{.IP}}: exit status 115 (529.372578ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-728643 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

TestFunctional/parallel/ServiceCmd/URL (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728643 service hello-node --url: exit status 115 (525.604774ms)

-- stdout --
	http://192.168.49.2:30205
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-728643 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30205
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)

TestJSONOutput/pause/Command (2.34s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-509145 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-509145 --output=json --user=testUser: exit status 80 (2.33695054s)

-- stdout --
	{"specversion":"1.0","id":"8af27a33-eaf0-44c9-8e65-db073d5e152c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-509145 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"35a7f92a-c353-42f4-b4c3-d4c85216fffd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-10T18:00:04Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"3b0fd144-a65c-455e-a42f-7c26182913c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-509145 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.34s)

TestJSONOutput/unpause/Command (1.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-509145 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-509145 --output=json --user=testUser: exit status 80 (1.653857369s)

-- stdout --
	{"specversion":"1.0","id":"c0929b7c-f75e-421d-ac95-c8ace98a750a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-509145 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"f16eb418-2b27-4c07-b03e-72efc9f6b5fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-10T18:00:06Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"c4507555-1397-4c4c-af50-2457dad8c9f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-509145 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.65s)
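
pause and unpause fail at the same step: minikube shells into the node and runs "sudo runc list -f json" to enumerate containers, and runc exits 1 because its default state directory /run/runc does not exist on this node (CRI-O drives the runtime itself, so nothing has populated that path). Note that --output=json emits one CloudEvents-style envelope per line (specversion 1.0, io.k8s.sigs.minikube.* event types), which is why the GUEST_PAUSE/GUEST_UNPAUSE errors above appear as JSON objects. A sketch for reproducing the failing check by hand; the /run/crio path is an assumption about where CRI-O keeps its own state:

	minikube -p json-output-509145 ssh -- sudo runc list -f json   # reproduces: open /run/runc: no such file or directory
	minikube -p json-output-509145 ssh -- ls /run/crio             # CRI-O's state directory, expected to exist on this node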

TestPause/serial/Pause (6.94s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-950227 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-950227 --alsologtostderr -v=5: exit status 80 (1.846530731s)

-- stdout --
	* Pausing node pause-950227 ... 
	
	

-- /stdout --
** stderr ** 
	I1010 18:16:25.942726  235391 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:16:25.944458  235391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:16:25.944470  235391 out.go:374] Setting ErrFile to fd 2...
	I1010 18:16:25.944477  235391 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:16:25.944980  235391 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:16:25.945487  235391 out.go:368] Setting JSON to false
	I1010 18:16:25.945547  235391 mustload.go:65] Loading cluster: pause-950227
	I1010 18:16:25.945963  235391 config.go:182] Loaded profile config "pause-950227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:16:25.946422  235391 cli_runner.go:164] Run: docker container inspect pause-950227 --format={{.State.Status}}
	I1010 18:16:25.972312  235391 host.go:66] Checking if "pause-950227" exists ...
	I1010 18:16:25.972791  235391 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:16:26.051941  235391 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:93 SystemTime:2025-10-10 18:16:26.040221333 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:16:26.052548  235391 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-950227 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1010 18:16:26.058313  235391 out.go:179] * Pausing node pause-950227 ... 
	I1010 18:16:26.059455  235391 host.go:66] Checking if "pause-950227" exists ...
	I1010 18:16:26.059726  235391 ssh_runner.go:195] Run: systemctl --version
	I1010 18:16:26.059774  235391 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-950227
	I1010 18:16:26.081215  235391 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/pause-950227/id_rsa Username:docker}
	I1010 18:16:26.183006  235391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:16:26.197139  235391 pause.go:52] kubelet running: true
	I1010 18:16:26.197205  235391 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:16:26.361520  235391 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:16:26.361614  235391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:16:26.440169  235391 cri.go:89] found id: "24d3727a1caf7ea8454b0a63879e3ae97f0345d7c4584d8bfb3f63e99a305463"
	I1010 18:16:26.440205  235391 cri.go:89] found id: "138558857f353346de22e898bff08dda71fb11cc165efe7dc8b3858d37a9fa30"
	I1010 18:16:26.440211  235391 cri.go:89] found id: "920f58080664d2c8c58c355a2253b0d4613573151a9472f5391f0855e086f433"
	I1010 18:16:26.440216  235391 cri.go:89] found id: "902554105d2e25376fdf24065c080a718c0248dccbf615a635299f9a9f6aa896"
	I1010 18:16:26.440221  235391 cri.go:89] found id: "9348a460aca77374c1b39c27458b01cafa91171ea46cc13c27783556839e5407"
	I1010 18:16:26.440225  235391 cri.go:89] found id: "56bf019745e9db0c23a0f3e53dc0302edd1929504a7cdaec4c606f9db01934cd"
	I1010 18:16:26.440229  235391 cri.go:89] found id: "8c4a8297dbdf5c9c125d0dca45d097f2180144b13b90ba4a003137b26d6d6b77"
	I1010 18:16:26.440233  235391 cri.go:89] found id: ""
	I1010 18:16:26.440285  235391 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:16:26.455515  235391 retry.go:31] will retry after 277.641038ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:16:26Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:16:26.734069  235391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:16:26.751320  235391 pause.go:52] kubelet running: false
	I1010 18:16:26.751383  235391 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:16:26.868518  235391 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:16:26.868612  235391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:16:26.940029  235391 cri.go:89] found id: "24d3727a1caf7ea8454b0a63879e3ae97f0345d7c4584d8bfb3f63e99a305463"
	I1010 18:16:26.940064  235391 cri.go:89] found id: "138558857f353346de22e898bff08dda71fb11cc165efe7dc8b3858d37a9fa30"
	I1010 18:16:26.940070  235391 cri.go:89] found id: "920f58080664d2c8c58c355a2253b0d4613573151a9472f5391f0855e086f433"
	I1010 18:16:26.940074  235391 cri.go:89] found id: "902554105d2e25376fdf24065c080a718c0248dccbf615a635299f9a9f6aa896"
	I1010 18:16:26.940078  235391 cri.go:89] found id: "9348a460aca77374c1b39c27458b01cafa91171ea46cc13c27783556839e5407"
	I1010 18:16:26.940082  235391 cri.go:89] found id: "56bf019745e9db0c23a0f3e53dc0302edd1929504a7cdaec4c606f9db01934cd"
	I1010 18:16:26.940087  235391 cri.go:89] found id: "8c4a8297dbdf5c9c125d0dca45d097f2180144b13b90ba4a003137b26d6d6b77"
	I1010 18:16:26.940091  235391 cri.go:89] found id: ""
	I1010 18:16:26.940130  235391 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:16:26.953759  235391 retry.go:31] will retry after 520.190877ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:16:26Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:16:27.474482  235391 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:16:27.490308  235391 pause.go:52] kubelet running: false
	I1010 18:16:27.490379  235391 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:16:27.636259  235391 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:16:27.636353  235391 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:16:27.707311  235391 cri.go:89] found id: "24d3727a1caf7ea8454b0a63879e3ae97f0345d7c4584d8bfb3f63e99a305463"
	I1010 18:16:27.707335  235391 cri.go:89] found id: "138558857f353346de22e898bff08dda71fb11cc165efe7dc8b3858d37a9fa30"
	I1010 18:16:27.707339  235391 cri.go:89] found id: "920f58080664d2c8c58c355a2253b0d4613573151a9472f5391f0855e086f433"
	I1010 18:16:27.707343  235391 cri.go:89] found id: "902554105d2e25376fdf24065c080a718c0248dccbf615a635299f9a9f6aa896"
	I1010 18:16:27.707348  235391 cri.go:89] found id: "9348a460aca77374c1b39c27458b01cafa91171ea46cc13c27783556839e5407"
	I1010 18:16:27.707353  235391 cri.go:89] found id: "56bf019745e9db0c23a0f3e53dc0302edd1929504a7cdaec4c606f9db01934cd"
	I1010 18:16:27.707357  235391 cri.go:89] found id: "8c4a8297dbdf5c9c125d0dca45d097f2180144b13b90ba4a003137b26d6d6b77"
	I1010 18:16:27.707360  235391 cri.go:89] found id: ""
	I1010 18:16:27.707406  235391 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:16:27.723640  235391 out.go:203] 
	W1010 18:16:27.724958  235391 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:16:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:16:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 18:16:27.724976  235391 out.go:285] * 
	* 
	W1010 18:16:27.730129  235391 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 18:16:27.731362  235391 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-950227 --alsologtostderr -v=5" : exit status 80
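
TestPause trips over the same missing /run/runc as TestJSONOutput above: the log shows pause first disabling the kubelet, then successfully listing seven CRI containers through crictl (i.e. via CRI-O), and only then calling the bare runc binary, which is what fails. The contrast can be reproduced on the node with the two commands taken verbatim from the log:

	# CRI-O sees the kube-system containers...
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# ...while bare runc, which reads /run/runc directly, errors out
	sudo runc list -f json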
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-950227
helpers_test.go:243: (dbg) docker inspect pause-950227:

-- stdout --
	[
	    {
	        "Id": "ad3e8dca8f39697e1fdfe876fecc5a1c601c7d4357eda580195531ba0687de27",
	        "Created": "2025-10-10T18:15:43.854047944Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 225710,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:15:43.891455702Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/ad3e8dca8f39697e1fdfe876fecc5a1c601c7d4357eda580195531ba0687de27/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ad3e8dca8f39697e1fdfe876fecc5a1c601c7d4357eda580195531ba0687de27/hostname",
	        "HostsPath": "/var/lib/docker/containers/ad3e8dca8f39697e1fdfe876fecc5a1c601c7d4357eda580195531ba0687de27/hosts",
	        "LogPath": "/var/lib/docker/containers/ad3e8dca8f39697e1fdfe876fecc5a1c601c7d4357eda580195531ba0687de27/ad3e8dca8f39697e1fdfe876fecc5a1c601c7d4357eda580195531ba0687de27-json.log",
	        "Name": "/pause-950227",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-950227:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-950227",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ad3e8dca8f39697e1fdfe876fecc5a1c601c7d4357eda580195531ba0687de27",
	                "LowerDir": "/var/lib/docker/overlay2/63bf4903969eefa14bf1d7feb584ba7169bc1df65299bea119d2942b1efb7d66-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/63bf4903969eefa14bf1d7feb584ba7169bc1df65299bea119d2942b1efb7d66/merged",
	                "UpperDir": "/var/lib/docker/overlay2/63bf4903969eefa14bf1d7feb584ba7169bc1df65299bea119d2942b1efb7d66/diff",
	                "WorkDir": "/var/lib/docker/overlay2/63bf4903969eefa14bf1d7feb584ba7169bc1df65299bea119d2942b1efb7d66/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-950227",
	                "Source": "/var/lib/docker/volumes/pause-950227/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-950227",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-950227",
	                "name.minikube.sigs.k8s.io": "pause-950227",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a538a6e09a319f165e67b26072992d998e4e21a2787143f9abe22a69f33c93f",
	            "SandboxKey": "/var/run/docker/netns/5a538a6e09a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-950227": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:c2:8c:8d:17:dd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "75c955ba1b2d3ad658da9f6166da0611207c4d1565854365404b27a7e67636ba",
	                    "EndpointID": "c852e9482c55f8b4a92a59ffcb8dc707c7f639a85bced49b0b3c677993c3238b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-950227",
	                        "ad3e8dca8f39"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-950227 -n pause-950227
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-950227 -n pause-950227: exit status 2 (333.192401ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-950227 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-950227 logs -n 25: (2.586356858s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p NoKubernetes-444917                                                                                                                   │ NoKubernetes-444917       │ jenkins │ v1.37.0 │ 10 Oct 25 18:13 UTC │ 10 Oct 25 18:13 UTC │
	│ ssh     │ cert-options-594273 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                              │ cert-options-594273       │ jenkins │ v1.37.0 │ 10 Oct 25 18:13 UTC │ 10 Oct 25 18:13 UTC │
	│ ssh     │ -p cert-options-594273 -- sudo cat /etc/kubernetes/admin.conf                                                                            │ cert-options-594273       │ jenkins │ v1.37.0 │ 10 Oct 25 18:13 UTC │ 10 Oct 25 18:13 UTC │
	│ delete  │ -p cert-options-594273                                                                                                                   │ cert-options-594273       │ jenkins │ v1.37.0 │ 10 Oct 25 18:13 UTC │ 10 Oct 25 18:13 UTC │
	│ delete  │ -p offline-crio-416783                                                                                                                   │ offline-crio-416783       │ jenkins │ v1.37.0 │ 10 Oct 25 18:13 UTC │ 10 Oct 25 18:13 UTC │
	│ start   │ -p kubernetes-upgrade-274910 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-274910 │ jenkins │ v1.37.0 │ 10 Oct 25 18:13 UTC │ 10 Oct 25 18:14 UTC │
	│ start   │ -p stopped-upgrade-839433 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-839433    │ jenkins │ v1.32.0 │ 10 Oct 25 18:14 UTC │ 10 Oct 25 18:14 UTC │
	│ start   │ -p missing-upgrade-085473 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-085473    │ jenkins │ v1.32.0 │ 10 Oct 25 18:14 UTC │ 10 Oct 25 18:14 UTC │
	│ stop    │ -p kubernetes-upgrade-274910                                                                                                             │ kubernetes-upgrade-274910 │ jenkins │ v1.37.0 │ 10 Oct 25 18:14 UTC │ 10 Oct 25 18:14 UTC │
	│ start   │ -p kubernetes-upgrade-274910 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-274910 │ jenkins │ v1.37.0 │ 10 Oct 25 18:14 UTC │                     │
	│ stop    │ stopped-upgrade-839433 stop                                                                                                              │ stopped-upgrade-839433    │ jenkins │ v1.32.0 │ 10 Oct 25 18:14 UTC │ 10 Oct 25 18:14 UTC │
	│ start   │ -p missing-upgrade-085473 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-085473    │ jenkins │ v1.37.0 │ 10 Oct 25 18:14 UTC │ 10 Oct 25 18:15 UTC │
	│ start   │ -p stopped-upgrade-839433 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-839433    │ jenkins │ v1.37.0 │ 10 Oct 25 18:14 UTC │ 10 Oct 25 18:15 UTC │
	│ delete  │ -p stopped-upgrade-839433                                                                                                                │ stopped-upgrade-839433    │ jenkins │ v1.37.0 │ 10 Oct 25 18:15 UTC │ 10 Oct 25 18:15 UTC │
	│ start   │ -p running-upgrade-390393 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-390393    │ jenkins │ v1.32.0 │ 10 Oct 25 18:15 UTC │ 10 Oct 25 18:15 UTC │
	│ start   │ -p running-upgrade-390393 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-390393    │ jenkins │ v1.37.0 │ 10 Oct 25 18:15 UTC │ 10 Oct 25 18:15 UTC │
	│ delete  │ -p missing-upgrade-085473                                                                                                                │ missing-upgrade-085473    │ jenkins │ v1.37.0 │ 10 Oct 25 18:15 UTC │ 10 Oct 25 18:15 UTC │
	│ start   │ -p pause-950227 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-950227              │ jenkins │ v1.37.0 │ 10 Oct 25 18:15 UTC │ 10 Oct 25 18:16 UTC │
	│ delete  │ -p running-upgrade-390393                                                                                                                │ running-upgrade-390393    │ jenkins │ v1.37.0 │ 10 Oct 25 18:15 UTC │ 10 Oct 25 18:15 UTC │
	│ start   │ -p auto-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                  │ auto-078032               │ jenkins │ v1.37.0 │ 10 Oct 25 18:15 UTC │                     │
	│ start   │ -p cert-expiration-770491 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                │ cert-expiration-770491    │ jenkins │ v1.37.0 │ 10 Oct 25 18:16 UTC │ 10 Oct 25 18:16 UTC │
	│ start   │ -p pause-950227 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-950227              │ jenkins │ v1.37.0 │ 10 Oct 25 18:16 UTC │ 10 Oct 25 18:16 UTC │
	│ delete  │ -p cert-expiration-770491                                                                                                                │ cert-expiration-770491    │ jenkins │ v1.37.0 │ 10 Oct 25 18:16 UTC │ 10 Oct 25 18:16 UTC │
	│ start   │ -p kindnet-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio │ kindnet-078032            │ jenkins │ v1.37.0 │ 10 Oct 25 18:16 UTC │                     │
	│ pause   │ -p pause-950227 --alsologtostderr -v=5                                                                                                   │ pause-950227              │ jenkins │ v1.37.0 │ 10 Oct 25 18:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
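Note: the final audit row (the pause command under test) has no end time, meaning it had not completed successfully when this log was captured. The failing sequence can be replayed verbatim from the audit rows above:

	out/minikube-linux-amd64 start -p pause-950227 --memory=3072 --install-addons=false --wait=all --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p pause-950227 --alsologtostderr -v=1 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 pause -p pause-950227 --alsologtostderr -v=5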
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:16:24
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
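Note: every entry below follows the klog header just described: a severity letter (I=info, W=warning, E=error, F=fatal), the date as mmdd, a microsecond timestamp, the logging process/thread ID, and the source file and line. For example, "I1010 18:16:24.892923  234668 out.go:360]" is an info-level message written on Oct 10 by process 234668 from out.go line 360. The interleaved IDs in this section (234668, 228555, 232364, 211214) are separate concurrent minikube runs whose logs were merged.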
	I1010 18:16:24.892923  234668 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:16:24.893229  234668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:16:24.893239  234668 out.go:374] Setting ErrFile to fd 2...
	I1010 18:16:24.893246  234668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:16:24.893469  234668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:16:24.893933  234668 out.go:368] Setting JSON to false
	I1010 18:16:24.895142  234668 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3525,"bootTime":1760116660,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:16:24.895228  234668 start.go:141] virtualization: kvm guest
	I1010 18:16:24.897072  234668 out.go:179] * [kindnet-078032] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:16:24.898234  234668 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:16:24.898244  234668 notify.go:220] Checking for updates...
	I1010 18:16:24.901267  234668 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:16:24.902398  234668 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:16:24.903578  234668 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:16:24.904595  234668 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:16:24.905424  234668 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:16:24.907274  234668 config.go:182] Loaded profile config "auto-078032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:16:24.907406  234668 config.go:182] Loaded profile config "kubernetes-upgrade-274910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:16:24.907512  234668 config.go:182] Loaded profile config "pause-950227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:16:24.907598  234668 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:16:24.931545  234668 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:16:24.931623  234668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:16:24.992630  234668 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-10 18:16:24.981804112 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:16:24.992780  234668 docker.go:318] overlay module found
	I1010 18:16:24.994397  234668 out.go:179] * Using the docker driver based on user configuration
	I1010 18:16:24.995555  234668 start.go:305] selected driver: docker
	I1010 18:16:24.995573  234668 start.go:925] validating driver "docker" against <nil>
	I1010 18:16:24.995589  234668 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:16:24.996197  234668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:16:25.064808  234668 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-10 18:16:25.055143118 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:16:25.064942  234668 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1010 18:16:25.065167  234668 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:16:25.066893  234668 out.go:179] * Using Docker driver with root privileges
	I1010 18:16:25.068020  234668 cni.go:84] Creating CNI manager for "kindnet"
	I1010 18:16:25.068034  234668 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1010 18:16:25.068102  234668 start.go:349] cluster config:
	{Name:kindnet-078032 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-078032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:16:25.069316  234668 out.go:179] * Starting "kindnet-078032" primary control-plane node in "kindnet-078032" cluster
	I1010 18:16:25.070371  234668 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:16:25.071766  234668 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:16:25.072848  234668 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:16:25.072880  234668 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:16:25.072892  234668 cache.go:58] Caching tarball of preloaded images
	I1010 18:16:25.072954  234668 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:16:25.072979  234668 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:16:25.072990  234668 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1010 18:16:25.073132  234668 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/config.json ...
	I1010 18:16:25.073165  234668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/config.json: {Name:mk7911dfb19849dfbc3cc73ed5965dd25330365d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:25.094671  234668 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:16:25.094689  234668 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:16:25.094704  234668 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:16:25.094725  234668 start.go:360] acquireMachinesLock for kindnet-078032: {Name:mk3bc1e8dcc66493934ce868b1b36a0b08ea0f91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:16:25.094832  234668 start.go:364] duration metric: took 85.486µs to acquireMachinesLock for "kindnet-078032"
	I1010 18:16:25.094861  234668 start.go:93] Provisioning new machine with config: &{Name:kindnet-078032 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-078032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:16:25.094945  234668 start.go:125] createHost starting for "" (driver="docker")
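Note: the full cluster config printed above is persisted to the profile's config.json (the profile.go line earlier shows the path), so individual fields can be inspected after the run. A minimal sketch, assuming jq is installed and that the JSON field names mirror the struct dump:

	jq '{Name, Driver, CNI: .KubernetesConfig.CNI, Version: .KubernetesConfig.KubernetesVersion}' /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/config.json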
	I1010 18:16:24.580283  228555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:16:25.080156  228555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:16:25.151088  228555 kubeadm.go:1113] duration metric: took 4.656056733s to wait for elevateKubeSystemPrivileges
	I1010 18:16:25.151123  228555 kubeadm.go:402] duration metric: took 15.763664329s to StartCluster
	I1010 18:16:25.151143  228555 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:25.151226  228555 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:16:25.152856  228555 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:25.153768  228555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:16:25.153774  228555 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:16:25.153867  228555 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:16:25.153946  228555 addons.go:69] Setting storage-provisioner=true in profile "auto-078032"
	I1010 18:16:25.153960  228555 addons.go:238] Setting addon storage-provisioner=true in "auto-078032"
	I1010 18:16:25.153965  228555 config.go:182] Loaded profile config "auto-078032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:16:25.153989  228555 host.go:66] Checking if "auto-078032" exists ...
	I1010 18:16:25.154018  228555 addons.go:69] Setting default-storageclass=true in profile "auto-078032"
	I1010 18:16:25.154036  228555 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-078032"
	I1010 18:16:25.155728  228555 cli_runner.go:164] Run: docker container inspect auto-078032 --format={{.State.Status}}
	I1010 18:16:25.156035  228555 cli_runner.go:164] Run: docker container inspect auto-078032 --format={{.State.Status}}
	I1010 18:16:25.156309  228555 out.go:179] * Verifying Kubernetes components...
	I1010 18:16:25.158605  228555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:16:25.183733  228555 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:16:24.508003  232364 pod_ready.go:94] pod "kube-controller-manager-pause-950227" is "Ready"
	I1010 18:16:24.508029  232364 pod_ready.go:86] duration metric: took 364.479336ms for pod "kube-controller-manager-pause-950227" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:16:24.708294  232364 pod_ready.go:83] waiting for pod "kube-proxy-w8m7g" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:16:25.107803  232364 pod_ready.go:94] pod "kube-proxy-w8m7g" is "Ready"
	I1010 18:16:25.107828  232364 pod_ready.go:86] duration metric: took 399.511064ms for pod "kube-proxy-w8m7g" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:16:25.308535  232364 pod_ready.go:83] waiting for pod "kube-scheduler-pause-950227" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:16:25.707712  232364 pod_ready.go:94] pod "kube-scheduler-pause-950227" is "Ready"
	I1010 18:16:25.707739  232364 pod_ready.go:86] duration metric: took 399.176196ms for pod "kube-scheduler-pause-950227" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:16:25.707754  232364 pod_ready.go:40] duration metric: took 1.604556393s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:16:25.765792  232364 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:16:25.767678  232364 out.go:179] * Done! kubectl is now configured to use "pause-950227" cluster and "default" namespace by default
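Note: the pod_ready loop above is minikube's built-in readiness wait over the listed component labels. The equivalent check can be reproduced against the finished cluster with kubectl (a sketch; minikube names the kubeconfig context after the profile):

	kubectl --context pause-950227 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=60s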
	I1010 18:16:25.184881  228555 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:16:25.184901  228555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:16:25.184960  228555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-078032
	I1010 18:16:25.185315  228555 addons.go:238] Setting addon default-storageclass=true in "auto-078032"
	I1010 18:16:25.185365  228555 host.go:66] Checking if "auto-078032" exists ...
	I1010 18:16:25.185846  228555 cli_runner.go:164] Run: docker container inspect auto-078032 --format={{.State.Status}}
	I1010 18:16:25.215841  228555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/auto-078032/id_rsa Username:docker}
	I1010 18:16:25.219949  228555 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:16:25.220099  228555 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:16:25.220233  228555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-078032
	I1010 18:16:25.241885  228555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/auto-078032/id_rsa Username:docker}
	I1010 18:16:25.262352  228555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1010 18:16:25.323684  228555 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:16:25.443198  228555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:16:25.448348  228555 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1010 18:16:25.449521  228555 node_ready.go:35] waiting up to 15m0s for node "auto-078032" to be "Ready" ...
	I1010 18:16:25.462145  228555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:16:25.874306  228555 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
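Note: the configmap pipeline a few lines up is how minikube injects the host.minikube.internal record: it sed-inserts a hosts plugin block ahead of CoreDNS's forward directive, then pipes the result back through kubectl replace. The injected record can be verified afterwards (a sketch against the auto-078032 cluster):

	kubectl --context auto-078032 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'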
	I1010 18:16:22.639526  211214 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1010 18:16:22.640029  211214 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1010 18:16:22.640146  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 18:16:22.640220  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 18:16:22.672657  211214 cri.go:89] found id: "b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b"
	I1010 18:16:22.672685  211214 cri.go:89] found id: ""
	I1010 18:16:22.672697  211214 logs.go:282] 1 containers: [b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b]
	I1010 18:16:22.672761  211214 ssh_runner.go:195] Run: which crictl
	I1010 18:16:22.677545  211214 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 18:16:22.677608  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 18:16:22.710944  211214 cri.go:89] found id: ""
	I1010 18:16:22.710971  211214 logs.go:282] 0 containers: []
	W1010 18:16:22.710981  211214 logs.go:284] No container was found matching "etcd"
	I1010 18:16:22.710990  211214 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 18:16:22.711047  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 18:16:22.746189  211214 cri.go:89] found id: ""
	I1010 18:16:22.746212  211214 logs.go:282] 0 containers: []
	W1010 18:16:22.746218  211214 logs.go:284] No container was found matching "coredns"
	I1010 18:16:22.746225  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 18:16:22.746282  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 18:16:22.778413  211214 cri.go:89] found id: "5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab"
	I1010 18:16:22.778465  211214 cri.go:89] found id: ""
	I1010 18:16:22.778475  211214 logs.go:282] 1 containers: [5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab]
	I1010 18:16:22.778571  211214 ssh_runner.go:195] Run: which crictl
	I1010 18:16:22.783230  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 18:16:22.783309  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 18:16:22.816659  211214 cri.go:89] found id: ""
	I1010 18:16:22.816683  211214 logs.go:282] 0 containers: []
	W1010 18:16:22.816694  211214 logs.go:284] No container was found matching "kube-proxy"
	I1010 18:16:22.816701  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 18:16:22.816761  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 18:16:22.857755  211214 cri.go:89] found id: "c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c"
	I1010 18:16:22.857848  211214 cri.go:89] found id: ""
	I1010 18:16:22.857872  211214 logs.go:282] 1 containers: [c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c]
	I1010 18:16:22.857953  211214 ssh_runner.go:195] Run: which crictl
	I1010 18:16:22.864603  211214 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 18:16:22.864674  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 18:16:22.902286  211214 cri.go:89] found id: ""
	I1010 18:16:22.902313  211214 logs.go:282] 0 containers: []
	W1010 18:16:22.902324  211214 logs.go:284] No container was found matching "kindnet"
	I1010 18:16:22.902330  211214 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 18:16:22.902391  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 18:16:22.933562  211214 cri.go:89] found id: ""
	I1010 18:16:22.933588  211214 logs.go:282] 0 containers: []
	W1010 18:16:22.933599  211214 logs.go:284] No container was found matching "storage-provisioner"
	I1010 18:16:22.933610  211214 logs.go:123] Gathering logs for kube-apiserver [b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b] ...
	I1010 18:16:22.933624  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b"
	I1010 18:16:22.969657  211214 logs.go:123] Gathering logs for kube-scheduler [5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab] ...
	I1010 18:16:22.969680  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab"
	I1010 18:16:23.020301  211214 logs.go:123] Gathering logs for kube-controller-manager [c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c] ...
	I1010 18:16:23.020327  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c"
	I1010 18:16:23.048841  211214 logs.go:123] Gathering logs for CRI-O ...
	I1010 18:16:23.048880  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 18:16:23.095380  211214 logs.go:123] Gathering logs for container status ...
	I1010 18:16:23.095417  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 18:16:23.130278  211214 logs.go:123] Gathering logs for kubelet ...
	I1010 18:16:23.130307  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 18:16:23.207896  211214 logs.go:123] Gathering logs for dmesg ...
	I1010 18:16:23.207924  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 18:16:23.223767  211214 logs.go:123] Gathering logs for describe nodes ...
	I1010 18:16:23.223796  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 18:16:23.287372  211214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
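Note: "connection refused" on 192.168.76.2:8443 means kube-apiserver is not accepting connections, which fits the crictl listing above finding an apiserver container but no etcd container; without its backing store the apiserver is unlikely to stay up. The same probe minikube issues can be run by hand (a sketch; -k skips verification of the cluster CA):

	curl -k https://192.168.76.2:8443/healthz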
	I1010 18:16:25.788141  211214 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1010 18:16:25.788528  211214 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1010 18:16:25.788584  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 18:16:25.788634  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 18:16:25.818705  211214 cri.go:89] found id: "b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b"
	I1010 18:16:25.818724  211214 cri.go:89] found id: ""
	I1010 18:16:25.818731  211214 logs.go:282] 1 containers: [b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b]
	I1010 18:16:25.818780  211214 ssh_runner.go:195] Run: which crictl
	I1010 18:16:25.822986  211214 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 18:16:25.823077  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 18:16:25.849674  211214 cri.go:89] found id: ""
	I1010 18:16:25.849700  211214 logs.go:282] 0 containers: []
	W1010 18:16:25.849710  211214 logs.go:284] No container was found matching "etcd"
	I1010 18:16:25.849717  211214 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 18:16:25.849772  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 18:16:25.886914  211214 cri.go:89] found id: ""
	I1010 18:16:25.886940  211214 logs.go:282] 0 containers: []
	W1010 18:16:25.886950  211214 logs.go:284] No container was found matching "coredns"
	I1010 18:16:25.886975  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 18:16:25.887033  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 18:16:25.921185  211214 cri.go:89] found id: "5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab"
	I1010 18:16:25.921206  211214 cri.go:89] found id: ""
	I1010 18:16:25.921215  211214 logs.go:282] 1 containers: [5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab]
	I1010 18:16:25.921267  211214 ssh_runner.go:195] Run: which crictl
	I1010 18:16:25.926306  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 18:16:25.926389  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 18:16:25.960249  211214 cri.go:89] found id: ""
	I1010 18:16:25.960270  211214 logs.go:282] 0 containers: []
	W1010 18:16:25.960277  211214 logs.go:284] No container was found matching "kube-proxy"
	I1010 18:16:25.960282  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 18:16:25.960329  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 18:16:25.997999  211214 cri.go:89] found id: "c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c"
	I1010 18:16:25.998024  211214 cri.go:89] found id: ""
	I1010 18:16:25.998035  211214 logs.go:282] 1 containers: [c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c]
	I1010 18:16:25.998127  211214 ssh_runner.go:195] Run: which crictl
	I1010 18:16:26.004481  211214 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 18:16:26.004548  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 18:16:26.044464  211214 cri.go:89] found id: ""
	I1010 18:16:26.044494  211214 logs.go:282] 0 containers: []
	W1010 18:16:26.044504  211214 logs.go:284] No container was found matching "kindnet"
	I1010 18:16:26.044511  211214 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 18:16:26.044565  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 18:16:26.077041  211214 cri.go:89] found id: ""
	I1010 18:16:26.077080  211214 logs.go:282] 0 containers: []
	W1010 18:16:26.077090  211214 logs.go:284] No container was found matching "storage-provisioner"
	I1010 18:16:26.077100  211214 logs.go:123] Gathering logs for container status ...
	I1010 18:16:26.077114  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 18:16:26.114528  211214 logs.go:123] Gathering logs for kubelet ...
	I1010 18:16:26.114558  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 18:16:26.193104  211214 logs.go:123] Gathering logs for dmesg ...
	I1010 18:16:26.193142  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 18:16:26.209973  211214 logs.go:123] Gathering logs for describe nodes ...
	I1010 18:16:26.209998  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 18:16:26.282309  211214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 18:16:26.282328  211214 logs.go:123] Gathering logs for kube-apiserver [b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b] ...
	I1010 18:16:26.282344  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b"
	I1010 18:16:26.318285  211214 logs.go:123] Gathering logs for kube-scheduler [5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab] ...
	I1010 18:16:26.318319  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab"
	I1010 18:16:26.374253  211214 logs.go:123] Gathering logs for kube-controller-manager [c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c] ...
	I1010 18:16:26.374290  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c"
	I1010 18:16:26.406787  211214 logs.go:123] Gathering logs for CRI-O ...
	I1010 18:16:26.406819  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	
	
	==> CRI-O <==
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.581509556Z" level=info msg="RDT not available in the host system"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.581524066Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.582406681Z" level=info msg="Conmon does support the --sync option"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.582426963Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.582439545Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.583421347Z" level=info msg="Conmon does support the --sync option"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.583444526Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.587847446Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.58788091Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.588643366Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.5892098Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.589272054Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.672006098Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-xnz7w Namespace:kube-system ID:47c2f6d046bf9b2f1dff8dbc1f2cd07b3c36d042e75c8eac5489c12ba3374f43 UID:6f50a9c5-d774-4c7e-b06e-e8d4224997f3 NetNS:/var/run/netns/11f1a069-d990-4f26-be9a-1a39e7344f6a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000280110}] Aliases:map[]}"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.672270188Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-xnz7w for CNI network kindnet (type=ptp)"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.672824955Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.672856526Z" level=info msg="Starting seccomp notifier watcher"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.672929401Z" level=info msg="Create NRI interface"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.673105292Z" level=info msg="built-in NRI default validator is disabled"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.67312458Z" level=info msg="runtime interface created"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.673139025Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.67314723Z" level=info msg="runtime interface starting up..."
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.673155302Z" level=info msg="starting plugins..."
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.673171932Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.673567133Z" level=info msg="No systemd watchdog enabled"
	Oct 10 18:16:22 pause-950227 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
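Note: the long "Current CRI-O configuration" record in this section is CRI-O echoing its effective configuration at startup, i.e. the merged result of /etc/crio/crio.conf and any drop-ins under /etc/crio/crio.conf.d/ (an assumption about the stock kicbase layout). The same journal excerpt minikube gathers here can be pulled by hand:

	out/minikube-linux-amd64 -p pause-950227 ssh -- sudo journalctl -u crio -n 400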
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	24d3727a1caf7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   47c2f6d046bf9       coredns-66bc5c9577-xnz7w               kube-system
	138558857f353       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   23 seconds ago      Running             kube-proxy                0                   0866eab019ace       kube-proxy-w8m7g                       kube-system
	920f58080664d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   530fc39af041e       kindnet-hltxf                          kube-system
	902554105d2e2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   33 seconds ago      Running             kube-controller-manager   0                   9bb76ced1a8b8       kube-controller-manager-pause-950227   kube-system
	9348a460aca77       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   33 seconds ago      Running             kube-apiserver            0                   4cff3366857d4       kube-apiserver-pause-950227            kube-system
	56bf019745e9d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   33 seconds ago      Running             kube-scheduler            0                   dd2cd6c4acf87       kube-scheduler-pause-950227            kube-system
	8c4a8297dbdf5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   33 seconds ago      Running             etcd                      0                   d5f4b3d7f4f4e       etcd-pause-950227                      kube-system
	
	
	==> coredns [24d3727a1caf7ea8454b0a63879e3ae97f0345d7c4584d8bfb3f63e99a305463] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37653 - 64676 "HINFO IN 1098477055154410395.5216215930617005721. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027506691s
	
	
	==> describe nodes <==
	Name:               pause-950227
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-950227
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=pause-950227
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_16_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:15:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-950227
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:16:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:16:15 +0000   Fri, 10 Oct 2025 18:15:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:16:15 +0000   Fri, 10 Oct 2025 18:15:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:16:15 +0000   Fri, 10 Oct 2025 18:15:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 18:16:15 +0000   Fri, 10 Oct 2025 18:16:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-950227
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                b87f1f41-ac56-40e3-a62d-35e38f5dc50c
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-xnz7w                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-950227                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-hltxf                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-950227             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-pause-950227    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-w8m7g                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-950227             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node pause-950227 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node pause-950227 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node pause-950227 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node pause-950227 event: Registered Node pause-950227 in Controller
	  Normal  NodeReady                14s   kubelet          Node pause-950227 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.077121] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021628] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.602398] kauditd_printk_skb: 47 callbacks suppressed
	[Oct10 17:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.057549] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.023904] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.023945] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.024888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.022912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +2.047862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +4.031726] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +8.191358] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[ +16.382802] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[Oct10 17:34] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	
	
	==> etcd [8c4a8297dbdf5c9c125d0dca45d097f2180144b13b90ba4a003137b26d6d6b77] <==
	{"level":"info","ts":"2025-10-10T18:16:03.798744Z","caller":"traceutil/trace.go:172","msg":"trace[105001392] transaction","detail":"{read_only:false; response_revision:296; number_of_response:1; }","duration":"163.885598ms","start":"2025-10-10T18:16:03.634848Z","end":"2025-10-10T18:16:03.798734Z","steps":["trace[105001392] 'process raft request'  (duration: 163.621611ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T18:16:03.798602Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.210405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-10-10T18:16:03.798926Z","caller":"traceutil/trace.go:172","msg":"trace[1804356506] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:295; }","duration":"164.552464ms","start":"2025-10-10T18:16:03.634363Z","end":"2025-10-10T18:16:03.798915Z","steps":["trace[1804356506] 'agreement among raft nodes before linearized reading'  (duration: 164.12408ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:16:03.798722Z","caller":"traceutil/trace.go:172","msg":"trace[897797819] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"118.612546ms","start":"2025-10-10T18:16:03.680097Z","end":"2025-10-10T18:16:03.798709Z","steps":["trace[897797819] 'process raft request'  (duration: 118.475128ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T18:16:04.093305Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"163.311931ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" limit:1 ","response":"range_response_count:1 size:238"}
	{"level":"info","ts":"2025-10-10T18:16:04.093380Z","caller":"traceutil/trace.go:172","msg":"trace[2023904747] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner; range_end:; response_count:1; response_revision:297; }","duration":"163.403987ms","start":"2025-10-10T18:16:03.929959Z","end":"2025-10-10T18:16:04.093363Z","steps":["trace[2023904747] 'range keys from in-memory index tree'  (duration: 163.195271ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T18:16:04.093305Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.39552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" limit:1 ","response":"range_response_count:1 size:234"}
	{"level":"info","ts":"2025-10-10T18:16:04.093480Z","caller":"traceutil/trace.go:172","msg":"trace[631194643] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller; range_end:; response_count:1; response_revision:297; }","duration":"113.576369ms","start":"2025-10-10T18:16:03.979887Z","end":"2025-10-10T18:16:04.093463Z","steps":["trace[631194643] 'range keys from in-memory index tree'  (duration: 113.273156ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T18:16:04.093305Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"214.266791ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-10-10T18:16:04.093567Z","caller":"traceutil/trace.go:172","msg":"trace[2081601603] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:297; }","duration":"214.528523ms","start":"2025-10-10T18:16:03.879027Z","end":"2025-10-10T18:16:04.093556Z","steps":["trace[2081601603] 'range keys from in-memory index tree'  (duration: 214.157971ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:16:04.335332Z","caller":"traceutil/trace.go:172","msg":"trace[1098345010] linearizableReadLoop","detail":"{readStateIndex:309; appliedIndex:309; }","duration":"105.514482ms","start":"2025-10-10T18:16:04.229797Z","end":"2025-10-10T18:16:04.335311Z","steps":["trace[1098345010] 'read index received'  (duration: 105.505543ms)","trace[1098345010] 'applied index is now lower than readState.Index'  (duration: 7.613µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-10T18:16:04.486024Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"256.205684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" limit:1 ","response":"range_response_count:1 size:203"}
	{"level":"warn","ts":"2025-10-10T18:16:04.486108Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.643756ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765397046624546 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" mod_revision:0 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" value_size:3864 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-10T18:16:04.486117Z","caller":"traceutil/trace.go:172","msg":"trace[136455759] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:299; }","duration":"256.313135ms","start":"2025-10-10T18:16:04.229787Z","end":"2025-10-10T18:16:04.486100Z","steps":["trace[136455759] 'agreement among raft nodes before linearized reading'  (duration: 105.613472ms)","trace[136455759] 'range keys from in-memory index tree'  (duration: 150.485802ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-10T18:16:04.486263Z","caller":"traceutil/trace.go:172","msg":"trace[20112571] transaction","detail":"{read_only:false; response_revision:301; number_of_response:1; }","duration":"281.902741ms","start":"2025-10-10T18:16:04.204352Z","end":"2025-10-10T18:16:04.486255Z","steps":["trace[20112571] 'process raft request'  (duration: 281.826127ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:16:04.486315Z","caller":"traceutil/trace.go:172","msg":"trace[155861281] linearizableReadLoop","detail":"{readStateIndex:310; appliedIndex:309; }","duration":"150.912024ms","start":"2025-10-10T18:16:04.335386Z","end":"2025-10-10T18:16:04.486298Z","steps":["trace[155861281] 'read index received'  (duration: 1.000837ms)","trace[155861281] 'applied index is now lower than readState.Index'  (duration: 149.909615ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-10T18:16:04.486405Z","caller":"traceutil/trace.go:172","msg":"trace[1540939540] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"283.244136ms","start":"2025-10-10T18:16:04.203150Z","end":"2025-10-10T18:16:04.486395Z","steps":["trace[1540939540] 'process raft request'  (duration: 132.244458ms)","trace[1540939540] 'compare'  (duration: 150.515006ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-10T18:16:04.486470Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.896642ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-10T18:16:04.486500Z","caller":"traceutil/trace.go:172","msg":"trace[1097153433] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:301; }","duration":"217.932964ms","start":"2025-10-10T18:16:04.268560Z","end":"2025-10-10T18:16:04.486493Z","steps":["trace[1097153433] 'agreement among raft nodes before linearized reading'  (duration: 217.871064ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T18:16:04.486662Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"207.518263ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" limit:1 ","response":"range_response_count:1 size:193"}
	{"level":"info","ts":"2025-10-10T18:16:04.486691Z","caller":"traceutil/trace.go:172","msg":"trace[146418787] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:301; }","duration":"207.552407ms","start":"2025-10-10T18:16:04.279131Z","end":"2025-10-10T18:16:04.486683Z","steps":["trace[146418787] 'agreement among raft nodes before linearized reading'  (duration: 207.453781ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T18:16:04.486706Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.594077ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"warn","ts":"2025-10-10T18:16:04.486662Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.615128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" limit:1 ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2025-10-10T18:16:04.486747Z","caller":"traceutil/trace.go:172","msg":"trace[1164448351] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:1; response_revision:301; }","duration":"157.627543ms","start":"2025-10-10T18:16:04.329100Z","end":"2025-10-10T18:16:04.486728Z","steps":["trace[1164448351] 'agreement among raft nodes before linearized reading'  (duration: 157.539157ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:16:04.486766Z","caller":"traceutil/trace.go:172","msg":"trace[1006779037] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:301; }","duration":"106.723357ms","start":"2025-10-10T18:16:04.380034Z","end":"2025-10-10T18:16:04.486758Z","steps":["trace[1006779037] 'agreement among raft nodes before linearized reading'  (duration: 106.544544ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:16:30 up 58 min,  0 user,  load average: 3.23, 2.67, 1.83
	Linux pause-950227 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [920f58080664d2c8c58c355a2253b0d4613573151a9472f5391f0855e086f433] <==
	I1010 18:16:05.135959       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:16:05.229573       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1010 18:16:05.229745       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:16:05.229763       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:16:05.229789       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:16:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:16:05.434112       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:16:05.434221       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:16:05.434241       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:16:05.434381       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 18:16:05.834700       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:16:05.834742       1 metrics.go:72] Registering metrics
	I1010 18:16:05.834876       1 controller.go:711] "Syncing nftables rules"
	I1010 18:16:15.436187       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1010 18:16:15.436261       1 main.go:301] handling current node
	I1010 18:16:25.441152       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1010 18:16:25.441192       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9348a460aca77374c1b39c27458b01cafa91171ea46cc13c27783556839e5407] <==
	I1010 18:15:56.572546       1 policy_source.go:240] refreshing policies
	E1010 18:15:56.616015       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1010 18:15:56.663687       1 controller.go:667] quota admission added evaluator for: namespaces
	I1010 18:15:56.669309       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1010 18:15:56.669325       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:15:56.675446       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:15:56.676297       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1010 18:15:56.752494       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:15:57.465839       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1010 18:15:57.469452       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1010 18:15:57.469472       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:15:57.903570       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:15:57.938914       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:15:58.071437       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1010 18:15:58.079924       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1010 18:15:58.081224       1 controller.go:667] quota admission added evaluator for: endpoints
	I1010 18:15:58.085776       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1010 18:15:58.482047       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1010 18:15:59.211657       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1010 18:15:59.221366       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1010 18:15:59.231081       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1010 18:16:04.197419       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:16:04.201773       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:16:04.202516       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1010 18:16:04.494355       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [902554105d2e25376fdf24065c080a718c0248dccbf615a635299f9a9f6aa896] <==
	I1010 18:16:03.643818       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1010 18:16:03.667123       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1010 18:16:03.676401       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1010 18:16:03.676501       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1010 18:16:03.676553       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1010 18:16:03.676560       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1010 18:16:03.676567       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1010 18:16:03.680775       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1010 18:16:03.682000       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1010 18:16:03.682137       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1010 18:16:03.682178       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1010 18:16:03.683306       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1010 18:16:03.683321       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1010 18:16:03.683332       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1010 18:16:03.683355       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1010 18:16:03.683358       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1010 18:16:03.683404       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1010 18:16:03.683428       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1010 18:16:03.683480       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1010 18:16:03.685964       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1010 18:16:03.688183       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:16:03.698402       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:16:03.708633       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 18:16:03.800564       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-950227" podCIDRs=["10.244.0.0/24"]
	I1010 18:16:18.634867       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [138558857f353346de22e898bff08dda71fb11cc165efe7dc8b3858d37a9fa30] <==
	I1010 18:16:05.019683       1 server_linux.go:53] "Using iptables proxy"
	I1010 18:16:05.093128       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 18:16:05.193840       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 18:16:05.193887       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1010 18:16:05.193997       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:16:05.219151       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:16:05.219319       1 server_linux.go:132] "Using iptables Proxier"
	I1010 18:16:05.227328       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:16:05.227864       1 server.go:527] "Version info" version="v1.34.1"
	I1010 18:16:05.228352       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:16:05.230203       1 config.go:200] "Starting service config controller"
	I1010 18:16:05.231152       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 18:16:05.230228       1 config.go:309] "Starting node config controller"
	I1010 18:16:05.231228       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 18:16:05.231239       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 18:16:05.230458       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 18:16:05.231247       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 18:16:05.230441       1 config.go:106] "Starting endpoint slice config controller"
	I1010 18:16:05.231281       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 18:16:05.331722       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1010 18:16:05.331799       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 18:16:05.331849       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [56bf019745e9db0c23a0f3e53dc0302edd1929504a7cdaec4c606f9db01934cd] <==
	E1010 18:15:56.515207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1010 18:15:56.515271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1010 18:15:56.515331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1010 18:15:56.515342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1010 18:15:56.515385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1010 18:15:56.515443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1010 18:15:56.515512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1010 18:15:56.515608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1010 18:15:56.515703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1010 18:15:56.515734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1010 18:15:56.515914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1010 18:15:56.515973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1010 18:15:56.515984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1010 18:15:56.515986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1010 18:15:56.516110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1010 18:15:56.516127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1010 18:15:57.348932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1010 18:15:57.436465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1010 18:15:57.503754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1010 18:15:57.580298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1010 18:15:57.675597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1010 18:15:57.695675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1010 18:15:57.704618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1010 18:15:57.747159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1010 18:15:58.110853       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 10 18:16:00 pause-950227 kubelet[1333]: E1010 18:16:00.062692    1333 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-950227\" already exists" pod="kube-system/kube-scheduler-pause-950227"
	Oct 10 18:16:00 pause-950227 kubelet[1333]: I1010 18:16:00.080937    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-950227" podStartSLOduration=1.0809137739999999 podStartE2EDuration="1.080913774s" podCreationTimestamp="2025-10-10 18:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:16:00.080844289 +0000 UTC m=+1.132874987" watchObservedRunningTime="2025-10-10 18:16:00.080913774 +0000 UTC m=+1.132944472"
	Oct 10 18:16:00 pause-950227 kubelet[1333]: I1010 18:16:00.093282    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-950227" podStartSLOduration=1.093259203 podStartE2EDuration="1.093259203s" podCreationTimestamp="2025-10-10 18:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:16:00.092044572 +0000 UTC m=+1.144075308" watchObservedRunningTime="2025-10-10 18:16:00.093259203 +0000 UTC m=+1.145289943"
	Oct 10 18:16:00 pause-950227 kubelet[1333]: I1010 18:16:00.104366    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-950227" podStartSLOduration=1.10434997 podStartE2EDuration="1.10434997s" podCreationTimestamp="2025-10-10 18:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:16:00.104297546 +0000 UTC m=+1.156328231" watchObservedRunningTime="2025-10-10 18:16:00.10434997 +0000 UTC m=+1.156380670"
	Oct 10 18:16:00 pause-950227 kubelet[1333]: I1010 18:16:00.129484    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-950227" podStartSLOduration=2.129461424 podStartE2EDuration="2.129461424s" podCreationTimestamp="2025-10-10 18:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:16:00.116542355 +0000 UTC m=+1.168573052" watchObservedRunningTime="2025-10-10 18:16:00.129461424 +0000 UTC m=+1.181492122"
	Oct 10 18:16:03 pause-950227 kubelet[1333]: I1010 18:16:03.871556    1333 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 10 18:16:03 pause-950227 kubelet[1333]: I1010 18:16:03.872311    1333 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557154    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88c885b6-4002-42aa-a45a-5c6d2642d35e-xtables-lock\") pod \"kube-proxy-w8m7g\" (UID: \"88c885b6-4002-42aa-a45a-5c6d2642d35e\") " pod="kube-system/kube-proxy-w8m7g"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557223    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cb1802a-3c55-4ae2-8fb9-4652ee01853a-lib-modules\") pod \"kindnet-hltxf\" (UID: \"5cb1802a-3c55-4ae2-8fb9-4652ee01853a\") " pod="kube-system/kindnet-hltxf"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557253    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46hb2\" (UniqueName: \"kubernetes.io/projected/5cb1802a-3c55-4ae2-8fb9-4652ee01853a-kube-api-access-46hb2\") pod \"kindnet-hltxf\" (UID: \"5cb1802a-3c55-4ae2-8fb9-4652ee01853a\") " pod="kube-system/kindnet-hltxf"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557287    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/88c885b6-4002-42aa-a45a-5c6d2642d35e-kube-proxy\") pod \"kube-proxy-w8m7g\" (UID: \"88c885b6-4002-42aa-a45a-5c6d2642d35e\") " pod="kube-system/kube-proxy-w8m7g"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557307    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5cb1802a-3c55-4ae2-8fb9-4652ee01853a-cni-cfg\") pod \"kindnet-hltxf\" (UID: \"5cb1802a-3c55-4ae2-8fb9-4652ee01853a\") " pod="kube-system/kindnet-hltxf"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557385    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88c885b6-4002-42aa-a45a-5c6d2642d35e-lib-modules\") pod \"kube-proxy-w8m7g\" (UID: \"88c885b6-4002-42aa-a45a-5c6d2642d35e\") " pod="kube-system/kube-proxy-w8m7g"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557426    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f98p\" (UniqueName: \"kubernetes.io/projected/88c885b6-4002-42aa-a45a-5c6d2642d35e-kube-api-access-4f98p\") pod \"kube-proxy-w8m7g\" (UID: \"88c885b6-4002-42aa-a45a-5c6d2642d35e\") " pod="kube-system/kube-proxy-w8m7g"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557873    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cb1802a-3c55-4ae2-8fb9-4652ee01853a-xtables-lock\") pod \"kindnet-hltxf\" (UID: \"5cb1802a-3c55-4ae2-8fb9-4652ee01853a\") " pod="kube-system/kindnet-hltxf"
	Oct 10 18:16:05 pause-950227 kubelet[1333]: I1010 18:16:05.097984    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w8m7g" podStartSLOduration=1.097932725 podStartE2EDuration="1.097932725s" podCreationTimestamp="2025-10-10 18:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:16:05.097897336 +0000 UTC m=+6.149928035" watchObservedRunningTime="2025-10-10 18:16:05.097932725 +0000 UTC m=+6.149963418"
	Oct 10 18:16:05 pause-950227 kubelet[1333]: I1010 18:16:05.098189    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hltxf" podStartSLOduration=1.09817481 podStartE2EDuration="1.09817481s" podCreationTimestamp="2025-10-10 18:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:16:05.086061806 +0000 UTC m=+6.138092492" watchObservedRunningTime="2025-10-10 18:16:05.09817481 +0000 UTC m=+6.150205508"
	Oct 10 18:16:15 pause-950227 kubelet[1333]: I1010 18:16:15.826842    1333 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 10 18:16:15 pause-950227 kubelet[1333]: I1010 18:16:15.948311    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f50a9c5-d774-4c7e-b06e-e8d4224997f3-config-volume\") pod \"coredns-66bc5c9577-xnz7w\" (UID: \"6f50a9c5-d774-4c7e-b06e-e8d4224997f3\") " pod="kube-system/coredns-66bc5c9577-xnz7w"
	Oct 10 18:16:15 pause-950227 kubelet[1333]: I1010 18:16:15.948381    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5c4q\" (UniqueName: \"kubernetes.io/projected/6f50a9c5-d774-4c7e-b06e-e8d4224997f3-kube-api-access-j5c4q\") pod \"coredns-66bc5c9577-xnz7w\" (UID: \"6f50a9c5-d774-4c7e-b06e-e8d4224997f3\") " pod="kube-system/coredns-66bc5c9577-xnz7w"
	Oct 10 18:16:17 pause-950227 kubelet[1333]: I1010 18:16:17.114166    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xnz7w" podStartSLOduration=13.114141216 podStartE2EDuration="13.114141216s" podCreationTimestamp="2025-10-10 18:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:16:17.113954779 +0000 UTC m=+18.165985478" watchObservedRunningTime="2025-10-10 18:16:17.114141216 +0000 UTC m=+18.166171914"
	Oct 10 18:16:26 pause-950227 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 10 18:16:26 pause-950227 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 10 18:16:26 pause-950227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 10 18:16:26 pause-950227 systemd[1]: kubelet.service: Consumed 1.269s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-950227 -n pause-950227
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-950227 -n pause-950227: exit status 2 (336.026442ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-950227 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-950227
helpers_test.go:243: (dbg) docker inspect pause-950227:

-- stdout --
	[
	    {
	        "Id": "ad3e8dca8f39697e1fdfe876fecc5a1c601c7d4357eda580195531ba0687de27",
	        "Created": "2025-10-10T18:15:43.854047944Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 225710,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:15:43.891455702Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/ad3e8dca8f39697e1fdfe876fecc5a1c601c7d4357eda580195531ba0687de27/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ad3e8dca8f39697e1fdfe876fecc5a1c601c7d4357eda580195531ba0687de27/hostname",
	        "HostsPath": "/var/lib/docker/containers/ad3e8dca8f39697e1fdfe876fecc5a1c601c7d4357eda580195531ba0687de27/hosts",
	        "LogPath": "/var/lib/docker/containers/ad3e8dca8f39697e1fdfe876fecc5a1c601c7d4357eda580195531ba0687de27/ad3e8dca8f39697e1fdfe876fecc5a1c601c7d4357eda580195531ba0687de27-json.log",
	        "Name": "/pause-950227",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-950227:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-950227",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ad3e8dca8f39697e1fdfe876fecc5a1c601c7d4357eda580195531ba0687de27",
	                "LowerDir": "/var/lib/docker/overlay2/63bf4903969eefa14bf1d7feb584ba7169bc1df65299bea119d2942b1efb7d66-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/63bf4903969eefa14bf1d7feb584ba7169bc1df65299bea119d2942b1efb7d66/merged",
	                "UpperDir": "/var/lib/docker/overlay2/63bf4903969eefa14bf1d7feb584ba7169bc1df65299bea119d2942b1efb7d66/diff",
	                "WorkDir": "/var/lib/docker/overlay2/63bf4903969eefa14bf1d7feb584ba7169bc1df65299bea119d2942b1efb7d66/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-950227",
	                "Source": "/var/lib/docker/volumes/pause-950227/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-950227",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-950227",
	                "name.minikube.sigs.k8s.io": "pause-950227",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a538a6e09a319f165e67b26072992d998e4e21a2787143f9abe22a69f33c93f",
	            "SandboxKey": "/var/run/docker/netns/5a538a6e09a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-950227": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:c2:8c:8d:17:dd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "75c955ba1b2d3ad658da9f6166da0611207c4d1565854365404b27a7e67636ba",
	                    "EndpointID": "c852e9482c55f8b4a92a59ffcb8dc707c7f639a85bced49b0b3c677993c3238b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-950227",
	                        "ad3e8dca8f39"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
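
The inspect output above is where the test gets its host-side endpoints: each guest port (22, 2376, 5000, 8443, 32443) is published on 127.0.0.1 under NetworkSettings.Ports. A minimal Go sketch of consuming such output, assuming the JSON array above is saved to inspect.json; the struct names here are illustrative, not minikube's actual types:

    // Extract the host port mapped to the guest's SSH port (22/tcp)
    // from `docker container inspect` output like the JSON above.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type portBinding struct {
    	HostIp   string
    	HostPort string
    }

    type inspectEntry struct {
    	NetworkSettings struct {
    		Ports map[string][]portBinding
    	}
    }

    func main() {
    	data, err := os.ReadFile("inspect.json") // e.g. docker container inspect pause-950227 > inspect.json
    	if err != nil {
    		panic(err)
    	}
    	var entries []inspectEntry // inspect returns a JSON array, one entry per container
    	if err := json.Unmarshal(data, &entries); err != nil {
    		panic(err)
    	}
    	if len(entries) == 0 {
    		panic("no such container")
    	}
    	if b := entries[0].NetworkSettings.Ports["22/tcp"]; len(b) > 0 {
    		fmt.Printf("ssh reachable at %s:%s\n", b[0].HostIp, b[0].HostPort) // e.g. 127.0.0.1:33048
    	}
    }

minikube itself performs the same lookup with a Go template, as seen later in this log: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'".
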
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-950227 -n pause-950227
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-950227 -n pause-950227: exit status 2 (352.863048ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-950227 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p NoKubernetes-444917                                                                                                                   │ NoKubernetes-444917       │ jenkins │ v1.37.0 │ 10 Oct 25 18:13 UTC │ 10 Oct 25 18:13 UTC │
	│ ssh     │ cert-options-594273 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                              │ cert-options-594273       │ jenkins │ v1.37.0 │ 10 Oct 25 18:13 UTC │ 10 Oct 25 18:13 UTC │
	│ ssh     │ -p cert-options-594273 -- sudo cat /etc/kubernetes/admin.conf                                                                            │ cert-options-594273       │ jenkins │ v1.37.0 │ 10 Oct 25 18:13 UTC │ 10 Oct 25 18:13 UTC │
	│ delete  │ -p cert-options-594273                                                                                                                   │ cert-options-594273       │ jenkins │ v1.37.0 │ 10 Oct 25 18:13 UTC │ 10 Oct 25 18:13 UTC │
	│ delete  │ -p offline-crio-416783                                                                                                                   │ offline-crio-416783       │ jenkins │ v1.37.0 │ 10 Oct 25 18:13 UTC │ 10 Oct 25 18:13 UTC │
	│ start   │ -p kubernetes-upgrade-274910 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-274910 │ jenkins │ v1.37.0 │ 10 Oct 25 18:13 UTC │ 10 Oct 25 18:14 UTC │
	│ start   │ -p stopped-upgrade-839433 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-839433    │ jenkins │ v1.32.0 │ 10 Oct 25 18:14 UTC │ 10 Oct 25 18:14 UTC │
	│ start   │ -p missing-upgrade-085473 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-085473    │ jenkins │ v1.32.0 │ 10 Oct 25 18:14 UTC │ 10 Oct 25 18:14 UTC │
	│ stop    │ -p kubernetes-upgrade-274910                                                                                                             │ kubernetes-upgrade-274910 │ jenkins │ v1.37.0 │ 10 Oct 25 18:14 UTC │ 10 Oct 25 18:14 UTC │
	│ start   │ -p kubernetes-upgrade-274910 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-274910 │ jenkins │ v1.37.0 │ 10 Oct 25 18:14 UTC │                     │
	│ stop    │ stopped-upgrade-839433 stop                                                                                                              │ stopped-upgrade-839433    │ jenkins │ v1.32.0 │ 10 Oct 25 18:14 UTC │ 10 Oct 25 18:14 UTC │
	│ start   │ -p missing-upgrade-085473 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-085473    │ jenkins │ v1.37.0 │ 10 Oct 25 18:14 UTC │ 10 Oct 25 18:15 UTC │
	│ start   │ -p stopped-upgrade-839433 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-839433    │ jenkins │ v1.37.0 │ 10 Oct 25 18:14 UTC │ 10 Oct 25 18:15 UTC │
	│ delete  │ -p stopped-upgrade-839433                                                                                                                │ stopped-upgrade-839433    │ jenkins │ v1.37.0 │ 10 Oct 25 18:15 UTC │ 10 Oct 25 18:15 UTC │
	│ start   │ -p running-upgrade-390393 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-390393    │ jenkins │ v1.32.0 │ 10 Oct 25 18:15 UTC │ 10 Oct 25 18:15 UTC │
	│ start   │ -p running-upgrade-390393 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-390393    │ jenkins │ v1.37.0 │ 10 Oct 25 18:15 UTC │ 10 Oct 25 18:15 UTC │
	│ delete  │ -p missing-upgrade-085473                                                                                                                │ missing-upgrade-085473    │ jenkins │ v1.37.0 │ 10 Oct 25 18:15 UTC │ 10 Oct 25 18:15 UTC │
	│ start   │ -p pause-950227 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-950227              │ jenkins │ v1.37.0 │ 10 Oct 25 18:15 UTC │ 10 Oct 25 18:16 UTC │
	│ delete  │ -p running-upgrade-390393                                                                                                                │ running-upgrade-390393    │ jenkins │ v1.37.0 │ 10 Oct 25 18:15 UTC │ 10 Oct 25 18:15 UTC │
	│ start   │ -p auto-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                  │ auto-078032               │ jenkins │ v1.37.0 │ 10 Oct 25 18:15 UTC │                     │
	│ start   │ -p cert-expiration-770491 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                │ cert-expiration-770491    │ jenkins │ v1.37.0 │ 10 Oct 25 18:16 UTC │ 10 Oct 25 18:16 UTC │
	│ start   │ -p pause-950227 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-950227              │ jenkins │ v1.37.0 │ 10 Oct 25 18:16 UTC │ 10 Oct 25 18:16 UTC │
	│ delete  │ -p cert-expiration-770491                                                                                                                │ cert-expiration-770491    │ jenkins │ v1.37.0 │ 10 Oct 25 18:16 UTC │ 10 Oct 25 18:16 UTC │
	│ start   │ -p kindnet-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio │ kindnet-078032            │ jenkins │ v1.37.0 │ 10 Oct 25 18:16 UTC │                     │
	│ pause   │ -p pause-950227 --alsologtostderr -v=5                                                                                                   │ pause-950227              │ jenkins │ v1.37.0 │ 10 Oct 25 18:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:16:24
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:16:24.892923  234668 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:16:24.893229  234668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:16:24.893239  234668 out.go:374] Setting ErrFile to fd 2...
	I1010 18:16:24.893246  234668 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:16:24.893469  234668 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:16:24.893933  234668 out.go:368] Setting JSON to false
	I1010 18:16:24.895142  234668 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3525,"bootTime":1760116660,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:16:24.895228  234668 start.go:141] virtualization: kvm guest
	I1010 18:16:24.897072  234668 out.go:179] * [kindnet-078032] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:16:24.898234  234668 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:16:24.898244  234668 notify.go:220] Checking for updates...
	I1010 18:16:24.901267  234668 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:16:24.902398  234668 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:16:24.903578  234668 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:16:24.904595  234668 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:16:24.905424  234668 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:16:24.907274  234668 config.go:182] Loaded profile config "auto-078032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:16:24.907406  234668 config.go:182] Loaded profile config "kubernetes-upgrade-274910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:16:24.907512  234668 config.go:182] Loaded profile config "pause-950227": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:16:24.907598  234668 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:16:24.931545  234668 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:16:24.931623  234668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:16:24.992630  234668 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-10 18:16:24.981804112 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:16:24.992780  234668 docker.go:318] overlay module found
	I1010 18:16:24.994397  234668 out.go:179] * Using the docker driver based on user configuration
	I1010 18:16:24.995555  234668 start.go:305] selected driver: docker
	I1010 18:16:24.995573  234668 start.go:925] validating driver "docker" against <nil>
	I1010 18:16:24.995589  234668 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:16:24.996197  234668 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:16:25.064808  234668 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-10 18:16:25.055143118 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:16:25.064942  234668 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1010 18:16:25.065167  234668 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:16:25.066893  234668 out.go:179] * Using Docker driver with root privileges
	I1010 18:16:25.068020  234668 cni.go:84] Creating CNI manager for "kindnet"
	I1010 18:16:25.068034  234668 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1010 18:16:25.068102  234668 start.go:349] cluster config:
	{Name:kindnet-078032 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-078032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:16:25.069316  234668 out.go:179] * Starting "kindnet-078032" primary control-plane node in "kindnet-078032" cluster
	I1010 18:16:25.070371  234668 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:16:25.071766  234668 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:16:25.072848  234668 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:16:25.072880  234668 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:16:25.072892  234668 cache.go:58] Caching tarball of preloaded images
	I1010 18:16:25.072954  234668 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:16:25.072979  234668 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:16:25.072990  234668 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1010 18:16:25.073132  234668 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/config.json ...
	I1010 18:16:25.073165  234668 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/config.json: {Name:mk7911dfb19849dfbc3cc73ed5965dd25330365d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:25.094671  234668 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:16:25.094689  234668 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:16:25.094704  234668 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:16:25.094725  234668 start.go:360] acquireMachinesLock for kindnet-078032: {Name:mk3bc1e8dcc66493934ce868b1b36a0b08ea0f91 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:16:25.094832  234668 start.go:364] duration metric: took 85.486µs to acquireMachinesLock for "kindnet-078032"
	I1010 18:16:25.094861  234668 start.go:93] Provisioning new machine with config: &{Name:kindnet-078032 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-078032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:16:25.094945  234668 start.go:125] createHost starting for "" (driver="docker")
	I1010 18:16:24.580283  228555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:16:25.080156  228555 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:16:25.151088  228555 kubeadm.go:1113] duration metric: took 4.656056733s to wait for elevateKubeSystemPrivileges
	I1010 18:16:25.151123  228555 kubeadm.go:402] duration metric: took 15.763664329s to StartCluster
	I1010 18:16:25.151143  228555 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:25.151226  228555 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:16:25.152856  228555 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:16:25.153768  228555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:16:25.153774  228555 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:16:25.153867  228555 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:16:25.153946  228555 addons.go:69] Setting storage-provisioner=true in profile "auto-078032"
	I1010 18:16:25.153960  228555 addons.go:238] Setting addon storage-provisioner=true in "auto-078032"
	I1010 18:16:25.153965  228555 config.go:182] Loaded profile config "auto-078032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:16:25.153989  228555 host.go:66] Checking if "auto-078032" exists ...
	I1010 18:16:25.154018  228555 addons.go:69] Setting default-storageclass=true in profile "auto-078032"
	I1010 18:16:25.154036  228555 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-078032"
	I1010 18:16:25.155728  228555 cli_runner.go:164] Run: docker container inspect auto-078032 --format={{.State.Status}}
	I1010 18:16:25.156035  228555 cli_runner.go:164] Run: docker container inspect auto-078032 --format={{.State.Status}}
	I1010 18:16:25.156309  228555 out.go:179] * Verifying Kubernetes components...
	I1010 18:16:25.158605  228555 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:16:25.183733  228555 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:16:24.508003  232364 pod_ready.go:94] pod "kube-controller-manager-pause-950227" is "Ready"
	I1010 18:16:24.508029  232364 pod_ready.go:86] duration metric: took 364.479336ms for pod "kube-controller-manager-pause-950227" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:16:24.708294  232364 pod_ready.go:83] waiting for pod "kube-proxy-w8m7g" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:16:25.107803  232364 pod_ready.go:94] pod "kube-proxy-w8m7g" is "Ready"
	I1010 18:16:25.107828  232364 pod_ready.go:86] duration metric: took 399.511064ms for pod "kube-proxy-w8m7g" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:16:25.308535  232364 pod_ready.go:83] waiting for pod "kube-scheduler-pause-950227" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:16:25.707712  232364 pod_ready.go:94] pod "kube-scheduler-pause-950227" is "Ready"
	I1010 18:16:25.707739  232364 pod_ready.go:86] duration metric: took 399.176196ms for pod "kube-scheduler-pause-950227" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:16:25.707754  232364 pod_ready.go:40] duration metric: took 1.604556393s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:16:25.765792  232364 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:16:25.767678  232364 out.go:179] * Done! kubectl is now configured to use "pause-950227" cluster and "default" namespace by default
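
The pod_ready lines above poll each kube-system control-plane pod until its Ready condition reports true, recording a duration metric per pod. A rough client-go equivalent of one such wait, assuming a reachable kubeconfig; the pod name is taken from the log, the interval and timeout are illustrative:

    // Poll a pod until its PodReady condition is True, like the
    // pod_ready waits in the log above. A sketch, not minikube's code.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func isReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	start := time.Now()
    	err = wait.PollUntilContextTimeout(context.Background(), 400*time.Millisecond, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-pause-950227", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat transient API errors as "not ready yet"
    			}
    			return isReady(pod), nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("pod is Ready after %s\n", time.Since(start)) // cf. the duration metrics above
    }
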
	I1010 18:16:25.184881  228555 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:16:25.184901  228555 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:16:25.184960  228555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-078032
	I1010 18:16:25.185315  228555 addons.go:238] Setting addon default-storageclass=true in "auto-078032"
	I1010 18:16:25.185365  228555 host.go:66] Checking if "auto-078032" exists ...
	I1010 18:16:25.185846  228555 cli_runner.go:164] Run: docker container inspect auto-078032 --format={{.State.Status}}
	I1010 18:16:25.215841  228555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/auto-078032/id_rsa Username:docker}
	I1010 18:16:25.219949  228555 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:16:25.220099  228555 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:16:25.220233  228555 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-078032
	I1010 18:16:25.241885  228555 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/auto-078032/id_rsa Username:docker}
	I1010 18:16:25.262352  228555 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1010 18:16:25.323684  228555 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:16:25.443198  228555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:16:25.448348  228555 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1010 18:16:25.449521  228555 node_ready.go:35] waiting up to 15m0s for node "auto-078032" to be "Ready" ...
	I1010 18:16:25.462145  228555 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:16:25.874306  228555 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
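
Two things happen in the addon segment above: the storage-provisioner and storageclass manifests are scp'd into the node and applied, and the CoreDNS ConfigMap is rewritten so host.minikube.internal resolves to the network gateway (192.168.85.1). The rewrite is done with the sed pipeline at 18:16:25.262352; conceptually it is a string insertion ahead of the Corefile's forward directive. A sketch of that edit in Go, against an illustrative Corefile (the real flow round-trips the ConfigMap through kubectl, as the log shows):

    // Insert a hosts stanza before "forward . /etc/resolv.conf",
    // mirroring the sed edit in the log above.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	corefile := `.:53 {
            errors
            forward . /etc/resolv.conf
            cache 30
    }`
    	hosts := `        hosts {
               192.168.85.1 host.minikube.internal
               fallthrough
            }`
    	var out []string
    	for _, line := range strings.Split(corefile, "\n") {
    		if strings.Contains(line, "forward . /etc/resolv.conf") {
    			out = append(out, hosts) // inject the record ahead of the upstream forwarder
    		}
    		out = append(out, line)
    	}
    	fmt.Println(strings.Join(out, "\n"))
    }
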
	I1010 18:16:22.639526  211214 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1010 18:16:22.640029  211214 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1010 18:16:22.640146  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 18:16:22.640220  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 18:16:22.672657  211214 cri.go:89] found id: "b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b"
	I1010 18:16:22.672685  211214 cri.go:89] found id: ""
	I1010 18:16:22.672697  211214 logs.go:282] 1 containers: [b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b]
	I1010 18:16:22.672761  211214 ssh_runner.go:195] Run: which crictl
	I1010 18:16:22.677545  211214 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 18:16:22.677608  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 18:16:22.710944  211214 cri.go:89] found id: ""
	I1010 18:16:22.710971  211214 logs.go:282] 0 containers: []
	W1010 18:16:22.710981  211214 logs.go:284] No container was found matching "etcd"
	I1010 18:16:22.710990  211214 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 18:16:22.711047  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 18:16:22.746189  211214 cri.go:89] found id: ""
	I1010 18:16:22.746212  211214 logs.go:282] 0 containers: []
	W1010 18:16:22.746218  211214 logs.go:284] No container was found matching "coredns"
	I1010 18:16:22.746225  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 18:16:22.746282  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 18:16:22.778413  211214 cri.go:89] found id: "5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab"
	I1010 18:16:22.778465  211214 cri.go:89] found id: ""
	I1010 18:16:22.778475  211214 logs.go:282] 1 containers: [5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab]
	I1010 18:16:22.778571  211214 ssh_runner.go:195] Run: which crictl
	I1010 18:16:22.783230  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 18:16:22.783309  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 18:16:22.816659  211214 cri.go:89] found id: ""
	I1010 18:16:22.816683  211214 logs.go:282] 0 containers: []
	W1010 18:16:22.816694  211214 logs.go:284] No container was found matching "kube-proxy"
	I1010 18:16:22.816701  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 18:16:22.816761  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 18:16:22.857755  211214 cri.go:89] found id: "c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c"
	I1010 18:16:22.857848  211214 cri.go:89] found id: ""
	I1010 18:16:22.857872  211214 logs.go:282] 1 containers: [c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c]
	I1010 18:16:22.857953  211214 ssh_runner.go:195] Run: which crictl
	I1010 18:16:22.864603  211214 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 18:16:22.864674  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 18:16:22.902286  211214 cri.go:89] found id: ""
	I1010 18:16:22.902313  211214 logs.go:282] 0 containers: []
	W1010 18:16:22.902324  211214 logs.go:284] No container was found matching "kindnet"
	I1010 18:16:22.902330  211214 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 18:16:22.902391  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 18:16:22.933562  211214 cri.go:89] found id: ""
	I1010 18:16:22.933588  211214 logs.go:282] 0 containers: []
	W1010 18:16:22.933599  211214 logs.go:284] No container was found matching "storage-provisioner"
	I1010 18:16:22.933610  211214 logs.go:123] Gathering logs for kube-apiserver [b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b] ...
	I1010 18:16:22.933624  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b"
	I1010 18:16:22.969657  211214 logs.go:123] Gathering logs for kube-scheduler [5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab] ...
	I1010 18:16:22.969680  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab"
	I1010 18:16:23.020301  211214 logs.go:123] Gathering logs for kube-controller-manager [c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c] ...
	I1010 18:16:23.020327  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c"
	I1010 18:16:23.048841  211214 logs.go:123] Gathering logs for CRI-O ...
	I1010 18:16:23.048880  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 18:16:23.095380  211214 logs.go:123] Gathering logs for container status ...
	I1010 18:16:23.095417  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 18:16:23.130278  211214 logs.go:123] Gathering logs for kubelet ...
	I1010 18:16:23.130307  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 18:16:23.207896  211214 logs.go:123] Gathering logs for dmesg ...
	I1010 18:16:23.207924  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 18:16:23.223767  211214 logs.go:123] Gathering logs for describe nodes ...
	I1010 18:16:23.223796  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 18:16:23.287372  211214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
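
The 211214 process above is in a retry loop: it probes the apiserver's /healthz, and while the probe keeps failing with connection refused it gathers whatever logs it can. The probe itself is just an HTTPS GET; a minimal sketch using the address from the log (this sketch skips certificate verification rather than loading the cluster CA, which is an assumption for brevity):

    // Probe an apiserver /healthz endpoint, as in the
    // "Checking apiserver healthz" lines above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// The cluster CA is not loaded in this sketch.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.76.2:8443/healthz")
    	if err != nil {
    		fmt.Println("stopped:", err) // e.g. connect: connection refused, as seen above
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz:", resp.Status)
    }
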
	I1010 18:16:25.788141  211214 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1010 18:16:25.788528  211214 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1010 18:16:25.788584  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1010 18:16:25.788634  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1010 18:16:25.818705  211214 cri.go:89] found id: "b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b"
	I1010 18:16:25.818724  211214 cri.go:89] found id: ""
	I1010 18:16:25.818731  211214 logs.go:282] 1 containers: [b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b]
	I1010 18:16:25.818780  211214 ssh_runner.go:195] Run: which crictl
	I1010 18:16:25.822986  211214 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1010 18:16:25.823077  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1010 18:16:25.849674  211214 cri.go:89] found id: ""
	I1010 18:16:25.849700  211214 logs.go:282] 0 containers: []
	W1010 18:16:25.849710  211214 logs.go:284] No container was found matching "etcd"
	I1010 18:16:25.849717  211214 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1010 18:16:25.849772  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1010 18:16:25.886914  211214 cri.go:89] found id: ""
	I1010 18:16:25.886940  211214 logs.go:282] 0 containers: []
	W1010 18:16:25.886950  211214 logs.go:284] No container was found matching "coredns"
	I1010 18:16:25.886975  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1010 18:16:25.887033  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1010 18:16:25.921185  211214 cri.go:89] found id: "5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab"
	I1010 18:16:25.921206  211214 cri.go:89] found id: ""
	I1010 18:16:25.921215  211214 logs.go:282] 1 containers: [5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab]
	I1010 18:16:25.921267  211214 ssh_runner.go:195] Run: which crictl
	I1010 18:16:25.926306  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1010 18:16:25.926389  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1010 18:16:25.960249  211214 cri.go:89] found id: ""
	I1010 18:16:25.960270  211214 logs.go:282] 0 containers: []
	W1010 18:16:25.960277  211214 logs.go:284] No container was found matching "kube-proxy"
	I1010 18:16:25.960282  211214 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1010 18:16:25.960329  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1010 18:16:25.997999  211214 cri.go:89] found id: "c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c"
	I1010 18:16:25.998024  211214 cri.go:89] found id: ""
	I1010 18:16:25.998035  211214 logs.go:282] 1 containers: [c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c]
	I1010 18:16:25.998127  211214 ssh_runner.go:195] Run: which crictl
	I1010 18:16:26.004481  211214 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1010 18:16:26.004548  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1010 18:16:26.044464  211214 cri.go:89] found id: ""
	I1010 18:16:26.044494  211214 logs.go:282] 0 containers: []
	W1010 18:16:26.044504  211214 logs.go:284] No container was found matching "kindnet"
	I1010 18:16:26.044511  211214 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1010 18:16:26.044565  211214 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1010 18:16:26.077041  211214 cri.go:89] found id: ""
	I1010 18:16:26.077080  211214 logs.go:282] 0 containers: []
	W1010 18:16:26.077090  211214 logs.go:284] No container was found matching "storage-provisioner"
	I1010 18:16:26.077100  211214 logs.go:123] Gathering logs for container status ...
	I1010 18:16:26.077114  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1010 18:16:26.114528  211214 logs.go:123] Gathering logs for kubelet ...
	I1010 18:16:26.114558  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1010 18:16:26.193104  211214 logs.go:123] Gathering logs for dmesg ...
	I1010 18:16:26.193142  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1010 18:16:26.209973  211214 logs.go:123] Gathering logs for describe nodes ...
	I1010 18:16:26.209998  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1010 18:16:26.282309  211214 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1010 18:16:26.282328  211214 logs.go:123] Gathering logs for kube-apiserver [b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b] ...
	I1010 18:16:26.282344  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b4fa089486e03973c4c2741b3af93233cef2a97a2d050137fca4071a8bccec5b"
	I1010 18:16:26.318285  211214 logs.go:123] Gathering logs for kube-scheduler [5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab] ...
	I1010 18:16:26.318319  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 5afe55bc7d8aa7a1dc08648b839f114bcbeff6b16165dccaf57bb53c00a63eab"
	I1010 18:16:26.374253  211214 logs.go:123] Gathering logs for kube-controller-manager [c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c] ...
	I1010 18:16:26.374290  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c6c0a05f5c63db90a4941c275b1cc9cb617ee1420fdaf9661d0f85037962c87c"
	I1010 18:16:26.406787  211214 logs.go:123] Gathering logs for CRI-O ...
	I1010 18:16:26.406819  211214 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1010 18:16:25.875612  228555 addons.go:514] duration metric: took 721.74373ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1010 18:16:25.954520  228555 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-078032" context rescaled to 1 replicas
	W1010 18:16:27.452569  228555 node_ready.go:57] node "auto-078032" has "Ready":"False" status (will retry)
	I1010 18:16:25.096843  234668 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1010 18:16:25.097103  234668 start.go:159] libmachine.API.Create for "kindnet-078032" (driver="docker")
	I1010 18:16:25.097136  234668 client.go:168] LocalClient.Create starting
	I1010 18:16:25.097206  234668 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem
	I1010 18:16:25.097252  234668 main.go:141] libmachine: Decoding PEM data...
	I1010 18:16:25.097280  234668 main.go:141] libmachine: Parsing certificate...
	I1010 18:16:25.097350  234668 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem
	I1010 18:16:25.097379  234668 main.go:141] libmachine: Decoding PEM data...
	I1010 18:16:25.097396  234668 main.go:141] libmachine: Parsing certificate...
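
The Reading/Decoding/Parsing triple above is the standard Go PEM-to-X.509 path. A compact sketch using the CA path from the log, with error handling reduced to panics:

    // Read a PEM file, decode its first block, and parse it as an
    // X.509 certificate, matching the three log steps above.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data) // "Decoding PEM data..."
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("CA subject:", cert.Subject)
    }
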
	I1010 18:16:25.097791  234668 cli_runner.go:164] Run: docker network inspect kindnet-078032 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1010 18:16:25.117473  234668 cli_runner.go:211] docker network inspect kindnet-078032 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1010 18:16:25.117526  234668 network_create.go:284] running [docker network inspect kindnet-078032] to gather additional debugging logs...
	I1010 18:16:25.117542  234668 cli_runner.go:164] Run: docker network inspect kindnet-078032
	W1010 18:16:25.136338  234668 cli_runner.go:211] docker network inspect kindnet-078032 returned with exit code 1
	I1010 18:16:25.136378  234668 network_create.go:287] error running [docker network inspect kindnet-078032]: docker network inspect kindnet-078032: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-078032 not found
	I1010 18:16:25.136394  234668 network_create.go:289] output of [docker network inspect kindnet-078032]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-078032 not found
	
	** /stderr **
	I1010 18:16:25.136573  234668 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:16:25.157696  234668 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f8fb0c8a54c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:51:a2:ab:ca:d6} reservation:<nil>}
	I1010 18:16:25.158732  234668 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bdbbffbd65c1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:11:33:77:48:20} reservation:<nil>}
	I1010 18:16:25.159771  234668 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b6a5dab2001 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:93:a5:d3:c3:8f} reservation:<nil>}
	I1010 18:16:25.160881  234668 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-d333f79e11ec IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:0a:6c:a8:49:86} reservation:<nil>}
	I1010 18:16:25.161803  234668 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b5fb6f0631f0 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:6a:99:b5:9d:d6:a1} reservation:<nil>}
	I1010 18:16:25.162662  234668 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-75c955ba1b2d IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:6a:18:a8:d4:de:b2} reservation:<nil>}
	I1010 18:16:25.164039  234668 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002074210}
	I1010 18:16:25.164128  234668 network_create.go:124] attempt to create docker network kindnet-078032 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1010 18:16:25.164217  234668 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-078032 kindnet-078032
	I1010 18:16:25.257528  234668 network_create.go:108] docker network kindnet-078032 192.168.103.0/24 created
	I1010 18:16:25.257561  234668 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-078032" container
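
The subnet scan above starts at 192.168.49.0/24 and advances the third octet by 9 (49, 58, 67, 76, 85, 94, 103) until an unclaimed /24 turns up; the free subnet's .1 becomes the gateway and .2 the node's static IP. A sketch of that scan, with the taken set hard-coded from the log; the real check inspects host bridge interfaces rather than a map:

    // Walk candidate private /24 subnets in steps of 9 until one is
    // free, as in the "skipping subnet ... that is taken" lines above.
    package main

    import "fmt"

    func main() {
    	taken := map[string]bool{ // subnets already used by other minikube networks, per the log
    		"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
    		"192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
    	}
    	for octet := 49; octet <= 254; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if taken[subnet] {
    			fmt.Println("skipping subnet", subnet, "that is taken")
    			continue
    		}
    		fmt.Println("using free private subnet", subnet) // 192.168.103.0/24 in the run above
    		break
    	}
    }
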
	I1010 18:16:25.257649  234668 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1010 18:16:25.279512  234668 cli_runner.go:164] Run: docker volume create kindnet-078032 --label name.minikube.sigs.k8s.io=kindnet-078032 --label created_by.minikube.sigs.k8s.io=true
	I1010 18:16:25.300035  234668 oci.go:103] Successfully created a docker volume kindnet-078032
	I1010 18:16:25.300139  234668 cli_runner.go:164] Run: docker run --rm --name kindnet-078032-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-078032 --entrypoint /usr/bin/test -v kindnet-078032:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -d /var/lib
	I1010 18:16:25.881549  234668 oci.go:107] Successfully prepared a docker volume kindnet-078032
	I1010 18:16:25.881631  234668 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:16:25.881661  234668 kic.go:194] Starting extracting preloaded images to volume ...
	I1010 18:16:25.881733  234668 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-078032:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir
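
The last step before the node container itself is created: the preloaded image tarball is expanded into the machine's named volume by a throwaway container whose entrypoint is tar, with -I lz4 handling decompression. Reproducing that docker run by hand might look like the following sketch (paths and image ref taken from the log; the image digest pin is omitted here for readability):

    // Re-run the preload extraction shown above via os/exec.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro",
    		"-v", "kindnet-078032:/extractDir", // the machine volume created a few lines earlier
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724",
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Println(string(out))
    		panic(err)
    	}
    }
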
	
	
	==> CRI-O <==
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.581509556Z" level=info msg="RDT not available in the host system"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.581524066Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.582406681Z" level=info msg="Conmon does support the --sync option"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.582426963Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.582439545Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.583421347Z" level=info msg="Conmon does support the --sync option"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.583444526Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.587847446Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.58788091Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.588643366Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.5892098Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.589272054Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.672006098Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-xnz7w Namespace:kube-system ID:47c2f6d046bf9b2f1dff8dbc1f2cd07b3c36d042e75c8eac5489c12ba3374f43 UID:6f50a9c5-d774-4c7e-b06e-e8d4224997f3 NetNS:/var/run/netns/11f1a069-d990-4f26-be9a-1a39e7344f6a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000280110}] Aliases:map[]}"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.672270188Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-xnz7w for CNI network kindnet (type=ptp)"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.672824955Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.672856526Z" level=info msg="Starting seccomp notifier watcher"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.672929401Z" level=info msg="Create NRI interface"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.673105292Z" level=info msg="built-in NRI default validator is disabled"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.67312458Z" level=info msg="runtime interface created"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.673139025Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.67314723Z" level=info msg="runtime interface starting up..."
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.673155302Z" level=info msg="starting plugins..."
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.673171932Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 10 18:16:22 pause-950227 crio[2188]: time="2025-10-10T18:16:22.673567133Z" level=info msg="No systemd watchdog enabled"
	Oct 10 18:16:22 pause-950227 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
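The configuration dump above is what CRI-O logs once at startup. The same information can be read back from the running node; a sketch, assuming the tooling shipped in the kicbase image:

  minikube ssh -p pause-950227 -- sudo crictl info     # runtime status and config as JSON
  minikube ssh -p pause-950227 -- sudo crio config     # active TOML configuration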
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	24d3727a1caf7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago      Running             coredns                   0                   47c2f6d046bf9       coredns-66bc5c9577-xnz7w               kube-system
	138558857f353       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   27 seconds ago      Running             kube-proxy                0                   0866eab019ace       kube-proxy-w8m7g                       kube-system
	920f58080664d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   27 seconds ago      Running             kindnet-cni               0                   530fc39af041e       kindnet-hltxf                          kube-system
	902554105d2e2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   37 seconds ago      Running             kube-controller-manager   0                   9bb76ced1a8b8       kube-controller-manager-pause-950227   kube-system
	9348a460aca77       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   37 seconds ago      Running             kube-apiserver            0                   4cff3366857d4       kube-apiserver-pause-950227            kube-system
	56bf019745e9d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   37 seconds ago      Running             kube-scheduler            0                   dd2cd6c4acf87       kube-scheduler-pause-950227            kube-system
	8c4a8297dbdf5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   37 seconds ago      Running             etcd                      0                   d5f4b3d7f4f4e       etcd-pause-950227                      kube-system
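The table above is collected over the CRI API; it can be reproduced directly on the node:

  minikube ssh -p pause-950227 -- sudo crictl ps -a   # all containers, including exited ones
  minikube ssh -p pause-950227 -- sudo crictl pods    # the pod sandboxes backing them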
	
	
	==> coredns [24d3727a1caf7ea8454b0a63879e3ae97f0345d7c4584d8bfb3f63e99a305463] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37653 - 64676 "HINFO IN 1098477055154410395.5216215930617005721. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027506691s
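The single HINFO query above is CoreDNS's startup self-check against its own listener. End-to-end resolution can be exercised from a throwaway pod; a sketch, assuming the busybox image can be pulled:

  kubectl --context pause-950227 run dns-check --image=busybox:1.36 --rm -it --restart=Never -- \
    nslookup kubernetes.default.svc.cluster.local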
	
	
	==> describe nodes <==
	Name:               pause-950227
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-950227
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=pause-950227
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_16_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:15:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-950227
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:16:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:16:15 +0000   Fri, 10 Oct 2025 18:15:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:16:15 +0000   Fri, 10 Oct 2025 18:15:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:16:15 +0000   Fri, 10 Oct 2025 18:15:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 18:16:15 +0000   Fri, 10 Oct 2025 18:16:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-950227
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                b87f1f41-ac56-40e3-a62d-35e38f5dc50c
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-xnz7w                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-pause-950227                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-hltxf                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-pause-950227             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-pause-950227    200m (2%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-w8m7g                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-pause-950227             100m (1%)     0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node pause-950227 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node pause-950227 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node pause-950227 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node pause-950227 event: Registered Node pause-950227 in Controller
	  Normal  NodeReady                17s   kubelet          Node pause-950227 status is now: NodeReady
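Everything in this block is read from the Node object, so individual fields can be pulled without the full describe; for example:

  kubectl --context pause-950227 get node pause-950227 -o jsonpath='{.status.allocatable}{"\n"}'
  kubectl --context pause-950227 get node pause-950227 -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}{"\n"}'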
	
	
	==> dmesg <==
	[  +0.077121] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.021628] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +4.602398] kauditd_printk_skb: 47 callbacks suppressed
	[Oct10 17:33] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.057549] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.023904] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.023945] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.024888] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +1.022912] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +2.047862] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +4.031726] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[  +8.191358] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[ +16.382802] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[Oct10 17:34] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
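The repeated "martian source" entries are the host kernel flagging packets that claim a 127.0.0.1 source on eth0, which is impossible on a routed interface; note the timestamps (17:33-17:34) predate this cluster, so they are background noise from an earlier test. Whether such packets are logged at all is a host sysctl:

  sysctl net.ipv4.conf.all.log_martians            # 1 = log martian packets
  sudo sysctl -w net.ipv4.conf.all.log_martians=0  # silence them if the noise is unwanted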
	
	
	==> etcd [8c4a8297dbdf5c9c125d0dca45d097f2180144b13b90ba4a003137b26d6d6b77] <==
	{"level":"info","ts":"2025-10-10T18:16:03.798744Z","caller":"traceutil/trace.go:172","msg":"trace[105001392] transaction","detail":"{read_only:false; response_revision:296; number_of_response:1; }","duration":"163.885598ms","start":"2025-10-10T18:16:03.634848Z","end":"2025-10-10T18:16:03.798734Z","steps":["trace[105001392] 'process raft request'  (duration: 163.621611ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T18:16:03.798602Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.210405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/certificate-controller\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-10-10T18:16:03.798926Z","caller":"traceutil/trace.go:172","msg":"trace[1804356506] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/certificate-controller; range_end:; response_count:1; response_revision:295; }","duration":"164.552464ms","start":"2025-10-10T18:16:03.634363Z","end":"2025-10-10T18:16:03.798915Z","steps":["trace[1804356506] 'agreement among raft nodes before linearized reading'  (duration: 164.12408ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:16:03.798722Z","caller":"traceutil/trace.go:172","msg":"trace[897797819] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"118.612546ms","start":"2025-10-10T18:16:03.680097Z","end":"2025-10-10T18:16:03.798709Z","steps":["trace[897797819] 'process raft request'  (duration: 118.475128ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T18:16:04.093305Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"163.311931ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" limit:1 ","response":"range_response_count:1 size:238"}
	{"level":"info","ts":"2025-10-10T18:16:04.093380Z","caller":"traceutil/trace.go:172","msg":"trace[2023904747] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner; range_end:; response_count:1; response_revision:297; }","duration":"163.403987ms","start":"2025-10-10T18:16:03.929959Z","end":"2025-10-10T18:16:04.093363Z","steps":["trace[2023904747] 'range keys from in-memory index tree'  (duration: 163.195271ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T18:16:04.093305Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.39552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" limit:1 ","response":"range_response_count:1 size:234"}
	{"level":"info","ts":"2025-10-10T18:16:04.093480Z","caller":"traceutil/trace.go:172","msg":"trace[631194643] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller; range_end:; response_count:1; response_revision:297; }","duration":"113.576369ms","start":"2025-10-10T18:16:03.979887Z","end":"2025-10-10T18:16:04.093463Z","steps":["trace[631194643] 'range keys from in-memory index tree'  (duration: 113.273156ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T18:16:04.093305Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"214.266791ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-10-10T18:16:04.093567Z","caller":"traceutil/trace.go:172","msg":"trace[2081601603] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:297; }","duration":"214.528523ms","start":"2025-10-10T18:16:03.879027Z","end":"2025-10-10T18:16:04.093556Z","steps":["trace[2081601603] 'range keys from in-memory index tree'  (duration: 214.157971ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:16:04.335332Z","caller":"traceutil/trace.go:172","msg":"trace[1098345010] linearizableReadLoop","detail":"{readStateIndex:309; appliedIndex:309; }","duration":"105.514482ms","start":"2025-10-10T18:16:04.229797Z","end":"2025-10-10T18:16:04.335311Z","steps":["trace[1098345010] 'read index received'  (duration: 105.505543ms)","trace[1098345010] 'applied index is now lower than readState.Index'  (duration: 7.613µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-10T18:16:04.486024Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"256.205684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" limit:1 ","response":"range_response_count:1 size:203"}
	{"level":"warn","ts":"2025-10-10T18:16:04.486108Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.643756ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765397046624546 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" mod_revision:0 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" value_size:3864 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-10T18:16:04.486117Z","caller":"traceutil/trace.go:172","msg":"trace[136455759] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:299; }","duration":"256.313135ms","start":"2025-10-10T18:16:04.229787Z","end":"2025-10-10T18:16:04.486100Z","steps":["trace[136455759] 'agreement among raft nodes before linearized reading'  (duration: 105.613472ms)","trace[136455759] 'range keys from in-memory index tree'  (duration: 150.485802ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-10T18:16:04.486263Z","caller":"traceutil/trace.go:172","msg":"trace[20112571] transaction","detail":"{read_only:false; response_revision:301; number_of_response:1; }","duration":"281.902741ms","start":"2025-10-10T18:16:04.204352Z","end":"2025-10-10T18:16:04.486255Z","steps":["trace[20112571] 'process raft request'  (duration: 281.826127ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:16:04.486315Z","caller":"traceutil/trace.go:172","msg":"trace[155861281] linearizableReadLoop","detail":"{readStateIndex:310; appliedIndex:309; }","duration":"150.912024ms","start":"2025-10-10T18:16:04.335386Z","end":"2025-10-10T18:16:04.486298Z","steps":["trace[155861281] 'read index received'  (duration: 1.000837ms)","trace[155861281] 'applied index is now lower than readState.Index'  (duration: 149.909615ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-10T18:16:04.486405Z","caller":"traceutil/trace.go:172","msg":"trace[1540939540] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"283.244136ms","start":"2025-10-10T18:16:04.203150Z","end":"2025-10-10T18:16:04.486395Z","steps":["trace[1540939540] 'process raft request'  (duration: 132.244458ms)","trace[1540939540] 'compare'  (duration: 150.515006ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-10T18:16:04.486470Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.896642ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-10T18:16:04.486500Z","caller":"traceutil/trace.go:172","msg":"trace[1097153433] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:301; }","duration":"217.932964ms","start":"2025-10-10T18:16:04.268560Z","end":"2025-10-10T18:16:04.486493Z","steps":["trace[1097153433] 'agreement among raft nodes before linearized reading'  (duration: 217.871064ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T18:16:04.486662Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"207.518263ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" limit:1 ","response":"range_response_count:1 size:193"}
	{"level":"info","ts":"2025-10-10T18:16:04.486691Z","caller":"traceutil/trace.go:172","msg":"trace[146418787] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:301; }","duration":"207.552407ms","start":"2025-10-10T18:16:04.279131Z","end":"2025-10-10T18:16:04.486683Z","steps":["trace[146418787] 'agreement among raft nodes before linearized reading'  (duration: 207.453781ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T18:16:04.486706Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.594077ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"warn","ts":"2025-10-10T18:16:04.486662Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.615128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" limit:1 ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2025-10-10T18:16:04.486747Z","caller":"traceutil/trace.go:172","msg":"trace[1164448351] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:1; response_revision:301; }","duration":"157.627543ms","start":"2025-10-10T18:16:04.329100Z","end":"2025-10-10T18:16:04.486728Z","steps":["trace[1164448351] 'agreement among raft nodes before linearized reading'  (duration: 157.539157ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:16:04.486766Z","caller":"traceutil/trace.go:172","msg":"trace[1006779037] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:301; }","duration":"106.723357ms","start":"2025-10-10T18:16:04.380034Z","end":"2025-10-10T18:16:04.486758Z","steps":["trace[1006779037] 'agreement among raft nodes before linearized reading'  (duration: 106.544544ms)"],"step_count":1}
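The "apply request took too long" warnings (reads of 100-300 ms against an expected 100 ms) point at transient disk or CPU pressure during cluster bootstrap rather than data corruption. Endpoint health can be confirmed from inside the etcd pod; a sketch, assuming minikube's usual certificate layout under /var/lib/minikube/certs:

  kubectl --context pause-950227 -n kube-system exec etcd-pause-950227 -- sh -c \
    'ETCDCTL_API=3 etcdctl \
       --cacert=/var/lib/minikube/certs/etcd/ca.crt \
       --cert=/var/lib/minikube/certs/etcd/server.crt \
       --key=/var/lib/minikube/certs/etcd/server.key \
       endpoint status --write-out=table'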
	
	
	==> kernel <==
	 18:16:32 up 58 min,  0 user,  load average: 3.45, 2.73, 1.85
	Linux pause-950227 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [920f58080664d2c8c58c355a2253b0d4613573151a9472f5391f0855e086f433] <==
	I1010 18:16:05.135959       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:16:05.229573       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1010 18:16:05.229745       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:16:05.229763       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:16:05.229789       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:16:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:16:05.434112       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:16:05.434221       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:16:05.434241       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:16:05.434381       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 18:16:05.834700       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:16:05.834742       1 metrics.go:72] Registering metrics
	I1010 18:16:05.834876       1 controller.go:711] "Syncing nftables rules"
	I1010 18:16:15.436187       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1010 18:16:15.436261       1 main.go:301] handling current node
	I1010 18:16:25.441152       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1010 18:16:25.441192       1 main.go:301] handling current node
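kindnet only ever handles the current node here because this is a single-node cluster, and the one-off NRI dial error looks like a startup race against CRI-O creating its socket. The CNI configuration it wrote is the file CRI-O registered earlier:

  minikube ssh -p pause-950227 -- cat /etc/cni/net.d/10-kindnet.conflist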
	
	
	==> kube-apiserver [9348a460aca77374c1b39c27458b01cafa91171ea46cc13c27783556839e5407] <==
	I1010 18:15:56.572546       1 policy_source.go:240] refreshing policies
	E1010 18:15:56.616015       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1010 18:15:56.663687       1 controller.go:667] quota admission added evaluator for: namespaces
	I1010 18:15:56.669309       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1010 18:15:56.669325       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:15:56.675446       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:15:56.676297       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1010 18:15:56.752494       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:15:57.465839       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1010 18:15:57.469452       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1010 18:15:57.469472       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:15:57.903570       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:15:57.938914       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:15:58.071437       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1010 18:15:58.079924       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1010 18:15:58.081224       1 controller.go:667] quota admission added evaluator for: endpoints
	I1010 18:15:58.085776       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1010 18:15:58.482047       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1010 18:15:59.211657       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1010 18:15:59.221366       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1010 18:15:59.231081       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1010 18:16:04.197419       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:16:04.201773       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:16:04.202516       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1010 18:16:04.494355       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [902554105d2e25376fdf24065c080a718c0248dccbf615a635299f9a9f6aa896] <==
	I1010 18:16:03.643818       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1010 18:16:03.667123       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1010 18:16:03.676401       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1010 18:16:03.676501       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1010 18:16:03.676553       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1010 18:16:03.676560       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1010 18:16:03.676567       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1010 18:16:03.680775       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1010 18:16:03.682000       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1010 18:16:03.682137       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1010 18:16:03.682178       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1010 18:16:03.683306       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1010 18:16:03.683321       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1010 18:16:03.683332       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1010 18:16:03.683355       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1010 18:16:03.683358       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1010 18:16:03.683404       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1010 18:16:03.683428       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1010 18:16:03.683480       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1010 18:16:03.685964       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1010 18:16:03.688183       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:16:03.698402       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:16:03.708633       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 18:16:03.800564       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-950227" podCIDRs=["10.244.0.0/24"]
	I1010 18:16:18.634867       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [138558857f353346de22e898bff08dda71fb11cc165efe7dc8b3858d37a9fa30] <==
	I1010 18:16:05.019683       1 server_linux.go:53] "Using iptables proxy"
	I1010 18:16:05.093128       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 18:16:05.193840       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 18:16:05.193887       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1010 18:16:05.193997       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:16:05.219151       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:16:05.219319       1 server_linux.go:132] "Using iptables Proxier"
	I1010 18:16:05.227328       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:16:05.227864       1 server.go:527] "Version info" version="v1.34.1"
	I1010 18:16:05.228352       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:16:05.230203       1 config.go:200] "Starting service config controller"
	I1010 18:16:05.231152       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 18:16:05.230228       1 config.go:309] "Starting node config controller"
	I1010 18:16:05.231228       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 18:16:05.231239       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 18:16:05.230458       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 18:16:05.231247       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 18:16:05.230441       1 config.go:106] "Starting endpoint slice config controller"
	I1010 18:16:05.231281       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 18:16:05.331722       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1010 18:16:05.331799       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 18:16:05.331849       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
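kube-proxy settled on the iptables backend, as the "Using iptables Proxier" line shows. The active mode is also exposed at runtime; a sketch, assuming the default metrics port 10249:

  minikube ssh -p pause-950227 -- curl -s http://127.0.0.1:10249/proxyMode   # expect: iptables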
	
	
	==> kube-scheduler [56bf019745e9db0c23a0f3e53dc0302edd1929504a7cdaec4c606f9db01934cd] <==
	E1010 18:15:56.515207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1010 18:15:56.515271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1010 18:15:56.515331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1010 18:15:56.515342       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1010 18:15:56.515385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1010 18:15:56.515443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1010 18:15:56.515512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1010 18:15:56.515608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1010 18:15:56.515703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1010 18:15:56.515734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1010 18:15:56.515914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1010 18:15:56.515973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1010 18:15:56.515984       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1010 18:15:56.515986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1010 18:15:56.516110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1010 18:15:56.516127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1010 18:15:57.348932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1010 18:15:57.436465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1010 18:15:57.503754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1010 18:15:57.580298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1010 18:15:57.675597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1010 18:15:57.695675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1010 18:15:57.704618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1010 18:15:57.747159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1010 18:15:58.110853       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
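The burst of "forbidden" list/watch errors is the usual scheduler startup race: its informers begin listing before the apiserver has finished publishing the system:kube-scheduler RBAC bindings, and they simply retry until the final "Caches are synced" line. The settled permissions are easy to confirm:

  kubectl --context pause-950227 auth can-i list pods --as=system:kube-scheduler   # expect: yes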
	
	
	==> kubelet <==
	Oct 10 18:16:00 pause-950227 kubelet[1333]: E1010 18:16:00.062692    1333 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-950227\" already exists" pod="kube-system/kube-scheduler-pause-950227"
	Oct 10 18:16:00 pause-950227 kubelet[1333]: I1010 18:16:00.080937    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-950227" podStartSLOduration=1.0809137739999999 podStartE2EDuration="1.080913774s" podCreationTimestamp="2025-10-10 18:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:16:00.080844289 +0000 UTC m=+1.132874987" watchObservedRunningTime="2025-10-10 18:16:00.080913774 +0000 UTC m=+1.132944472"
	Oct 10 18:16:00 pause-950227 kubelet[1333]: I1010 18:16:00.093282    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-950227" podStartSLOduration=1.093259203 podStartE2EDuration="1.093259203s" podCreationTimestamp="2025-10-10 18:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:16:00.092044572 +0000 UTC m=+1.144075308" watchObservedRunningTime="2025-10-10 18:16:00.093259203 +0000 UTC m=+1.145289943"
	Oct 10 18:16:00 pause-950227 kubelet[1333]: I1010 18:16:00.104366    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-950227" podStartSLOduration=1.10434997 podStartE2EDuration="1.10434997s" podCreationTimestamp="2025-10-10 18:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:16:00.104297546 +0000 UTC m=+1.156328231" watchObservedRunningTime="2025-10-10 18:16:00.10434997 +0000 UTC m=+1.156380670"
	Oct 10 18:16:00 pause-950227 kubelet[1333]: I1010 18:16:00.129484    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-950227" podStartSLOduration=2.129461424 podStartE2EDuration="2.129461424s" podCreationTimestamp="2025-10-10 18:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:16:00.116542355 +0000 UTC m=+1.168573052" watchObservedRunningTime="2025-10-10 18:16:00.129461424 +0000 UTC m=+1.181492122"
	Oct 10 18:16:03 pause-950227 kubelet[1333]: I1010 18:16:03.871556    1333 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 10 18:16:03 pause-950227 kubelet[1333]: I1010 18:16:03.872311    1333 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557154    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88c885b6-4002-42aa-a45a-5c6d2642d35e-xtables-lock\") pod \"kube-proxy-w8m7g\" (UID: \"88c885b6-4002-42aa-a45a-5c6d2642d35e\") " pod="kube-system/kube-proxy-w8m7g"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557223    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5cb1802a-3c55-4ae2-8fb9-4652ee01853a-lib-modules\") pod \"kindnet-hltxf\" (UID: \"5cb1802a-3c55-4ae2-8fb9-4652ee01853a\") " pod="kube-system/kindnet-hltxf"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557253    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46hb2\" (UniqueName: \"kubernetes.io/projected/5cb1802a-3c55-4ae2-8fb9-4652ee01853a-kube-api-access-46hb2\") pod \"kindnet-hltxf\" (UID: \"5cb1802a-3c55-4ae2-8fb9-4652ee01853a\") " pod="kube-system/kindnet-hltxf"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557287    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/88c885b6-4002-42aa-a45a-5c6d2642d35e-kube-proxy\") pod \"kube-proxy-w8m7g\" (UID: \"88c885b6-4002-42aa-a45a-5c6d2642d35e\") " pod="kube-system/kube-proxy-w8m7g"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557307    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5cb1802a-3c55-4ae2-8fb9-4652ee01853a-cni-cfg\") pod \"kindnet-hltxf\" (UID: \"5cb1802a-3c55-4ae2-8fb9-4652ee01853a\") " pod="kube-system/kindnet-hltxf"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557385    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88c885b6-4002-42aa-a45a-5c6d2642d35e-lib-modules\") pod \"kube-proxy-w8m7g\" (UID: \"88c885b6-4002-42aa-a45a-5c6d2642d35e\") " pod="kube-system/kube-proxy-w8m7g"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557426    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f98p\" (UniqueName: \"kubernetes.io/projected/88c885b6-4002-42aa-a45a-5c6d2642d35e-kube-api-access-4f98p\") pod \"kube-proxy-w8m7g\" (UID: \"88c885b6-4002-42aa-a45a-5c6d2642d35e\") " pod="kube-system/kube-proxy-w8m7g"
	Oct 10 18:16:04 pause-950227 kubelet[1333]: I1010 18:16:04.557873    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5cb1802a-3c55-4ae2-8fb9-4652ee01853a-xtables-lock\") pod \"kindnet-hltxf\" (UID: \"5cb1802a-3c55-4ae2-8fb9-4652ee01853a\") " pod="kube-system/kindnet-hltxf"
	Oct 10 18:16:05 pause-950227 kubelet[1333]: I1010 18:16:05.097984    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w8m7g" podStartSLOduration=1.097932725 podStartE2EDuration="1.097932725s" podCreationTimestamp="2025-10-10 18:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:16:05.097897336 +0000 UTC m=+6.149928035" watchObservedRunningTime="2025-10-10 18:16:05.097932725 +0000 UTC m=+6.149963418"
	Oct 10 18:16:05 pause-950227 kubelet[1333]: I1010 18:16:05.098189    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hltxf" podStartSLOduration=1.09817481 podStartE2EDuration="1.09817481s" podCreationTimestamp="2025-10-10 18:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:16:05.086061806 +0000 UTC m=+6.138092492" watchObservedRunningTime="2025-10-10 18:16:05.09817481 +0000 UTC m=+6.150205508"
	Oct 10 18:16:15 pause-950227 kubelet[1333]: I1010 18:16:15.826842    1333 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 10 18:16:15 pause-950227 kubelet[1333]: I1010 18:16:15.948311    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f50a9c5-d774-4c7e-b06e-e8d4224997f3-config-volume\") pod \"coredns-66bc5c9577-xnz7w\" (UID: \"6f50a9c5-d774-4c7e-b06e-e8d4224997f3\") " pod="kube-system/coredns-66bc5c9577-xnz7w"
	Oct 10 18:16:15 pause-950227 kubelet[1333]: I1010 18:16:15.948381    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5c4q\" (UniqueName: \"kubernetes.io/projected/6f50a9c5-d774-4c7e-b06e-e8d4224997f3-kube-api-access-j5c4q\") pod \"coredns-66bc5c9577-xnz7w\" (UID: \"6f50a9c5-d774-4c7e-b06e-e8d4224997f3\") " pod="kube-system/coredns-66bc5c9577-xnz7w"
	Oct 10 18:16:17 pause-950227 kubelet[1333]: I1010 18:16:17.114166    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xnz7w" podStartSLOduration=13.114141216 podStartE2EDuration="13.114141216s" podCreationTimestamp="2025-10-10 18:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:16:17.113954779 +0000 UTC m=+18.165985478" watchObservedRunningTime="2025-10-10 18:16:17.114141216 +0000 UTC m=+18.166171914"
	Oct 10 18:16:26 pause-950227 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 10 18:16:26 pause-950227 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 10 18:16:26 pause-950227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 10 18:16:26 pause-950227 systemd[1]: kubelet.service: Consumed 1.269s CPU time.
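The last four lines are the pause itself: minikube pause stops the kubelet unit (and freezes the workload containers), which is why the status probe below still reports the apiserver as Running while the command exits non-zero. On the node this is directly observable:

  minikube pause -p pause-950227
  minikube ssh -p pause-950227 -- systemctl is-active kubelet   # expect: inactive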
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-950227 -n pause-950227
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-950227 -n pause-950227: exit status 2 (313.920683ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-950227 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.94s)
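
For reference, the probe behind the status lines above is just the logged command; a minimal Go sketch (assuming only the binary path and profile name shown in this report, not harness internals) that reproduces it:

	// status_probe.go - illustrative sketch, not part of the test harness.
	// Runs the same command helpers_test.go logs above; exit status 2 with
	// stdout "Running" is the combination the "(may be ok)" note refers to.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "pause-950227", "-n", "pause-950227").CombinedOutput()
		fmt.Printf("apiserver: %s (err: %v)\n", out, err)
	}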

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-141193 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-141193 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (237.435079ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:20:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
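
The MK_ADDON_ENABLE_PAUSED exit above comes from the paused-state check named in the stderr: minikube shells out to `sudo runc list -f json` and treats any non-zero exit as a failure. A minimal Go sketch of that probe, based only on the command and error text logged here (not minikube's actual source):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listPaused mirrors the logged probe. When runc has never written state
	// under its default root, the command fails with
	// `open /run/runc: no such file or directory`, exactly as captured above.
	func listPaused() ([]byte, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("runc: sudo runc list -f json: %w: %s", err, out)
		}
		return out, nil
	}

	func main() {
		if _, err := listPaused(); err != nil {
			fmt.Println(err)
		}
	}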
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-141193 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-141193 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-141193 describe deploy/metrics-server -n kube-system: exit status 1 (61.097262ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-141193 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
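What the assertion at start_stop_delete_test.go:219 boils down to: describe the metrics-server deployment and require the overridden registry/image pair in its output. A rough stand-alone sketch of that check (an illustration, not the harness code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const want = "fake.domain/registry.k8s.io/echoserver:1.4"
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-141193",
			"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
		if err != nil {
			// In this run the addon enable itself failed, so the deployment
			// does not exist and kubectl exits 1 with NotFound.
			fmt.Printf("describe failed: %v\n%s", err, out)
			return
		}
		if !strings.Contains(string(out), want) {
			fmt.Printf("addon did not load correct image; want %q\n", want)
		}
	}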
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-141193
helpers_test.go:243: (dbg) docker inspect old-k8s-version-141193:

-- stdout --
	[
	    {
	        "Id": "00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c",
	        "Created": "2025-10-10T18:19:07.516278103Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286260,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:19:07.554856104Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c/hostname",
	        "HostsPath": "/var/lib/docker/containers/00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c/hosts",
	        "LogPath": "/var/lib/docker/containers/00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c/00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c-json.log",
	        "Name": "/old-k8s-version-141193",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-141193:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-141193",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c",
	                "LowerDir": "/var/lib/docker/overlay2/8175d4c388af2a62328900c4de53ca564319b22d0194435beaabfec458b151c4-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8175d4c388af2a62328900c4de53ca564319b22d0194435beaabfec458b151c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8175d4c388af2a62328900c4de53ca564319b22d0194435beaabfec458b151c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8175d4c388af2a62328900c4de53ca564319b22d0194435beaabfec458b151c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-141193",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-141193/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-141193",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-141193",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-141193",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "170130ce2616fb4fd4dcd1366ec55106e25366365fa58e8e349a58dcc766e5cf",
	            "SandboxKey": "/var/run/docker/netns/170130ce2616",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-141193": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "76:0b:61:6d:8b:c1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7dff4078001ce0edf8fdd80b625c94d6d211c5682186b40a040629dae3a3adf3",
	                    "EndpointID": "a2e1c3fa7391d1468cab8840273085b0bcc91f5133d6b1de68feec8d7352c003",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-141193",
	                        "00949309f427"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
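A tip for reading inspect dumps like the one above: the per-port host mappings live under .NetworkSettings.Ports, so a docker Go template can pull a single value (here the host port bound to the container's SSH port, 33088 in this run) instead of scanning the full JSON. A small sketch using only standard docker CLI templating:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Extract the host port mapped to 22/tcp from the container above.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format,
			"old-k8s-version-141193").Output()
		if err != nil {
			fmt.Println("inspect:", err)
			return
		}
		fmt.Printf("ssh host port: %s", out) // prints 33088 for this run
	}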
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-141193 -n old-k8s-version-141193
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-141193 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-141193 logs -n 25: (1.017109715s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                          │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-078032 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                 │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ ssh     │ -p flannel-078032 sudo cat /etc/kubernetes/kubelet.conf                                                                                                │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ ssh     │ -p flannel-078032 sudo cat /var/lib/kubelet/config.yaml                                                                                                │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ ssh     │ -p flannel-078032 sudo systemctl status docker --all --full --no-pager                                                                                 │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │                     │
	│ ssh     │ -p flannel-078032 sudo systemctl cat docker --no-pager                                                                                                 │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ ssh     │ -p flannel-078032 sudo cat /etc/docker/daemon.json                                                                                                     │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │                     │
	│ ssh     │ -p flannel-078032 sudo docker system info                                                                                                              │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │                     │
	│ ssh     │ -p flannel-078032 sudo systemctl status cri-docker --all --full --no-pager                                                                             │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │                     │
	│ ssh     │ -p flannel-078032 sudo systemctl cat cri-docker --no-pager                                                                                             │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ ssh     │ -p flannel-078032 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                        │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │                     │
	│ ssh     │ -p flannel-078032 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                  │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ ssh     │ -p flannel-078032 sudo cri-dockerd --version                                                                                                           │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ ssh     │ -p flannel-078032 sudo systemctl status containerd --all --full --no-pager                                                                             │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │                     │
	│ ssh     │ -p flannel-078032 sudo systemctl cat containerd --no-pager                                                                                             │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ ssh     │ -p flannel-078032 sudo cat /lib/systemd/system/containerd.service                                                                                      │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ ssh     │ -p flannel-078032 sudo cat /etc/containerd/config.toml                                                                                                 │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ ssh     │ -p flannel-078032 sudo containerd config dump                                                                                                          │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ ssh     │ -p flannel-078032 sudo systemctl status crio --all --full --no-pager                                                                                   │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ ssh     │ -p flannel-078032 sudo systemctl cat crio --no-pager                                                                                                   │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ ssh     │ -p flannel-078032 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                         │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ ssh     │ -p flannel-078032 sudo crio config                                                                                                                     │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ delete  │ -p flannel-078032                                                                                                                                      │ flannel-078032         │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ start   │ -p embed-certs-472518 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ embed-certs-472518     │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │                     │
	│ ssh     │ -p bridge-078032 pgrep -a kubelet                                                                                                                      │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:19 UTC │ 10 Oct 25 18:19 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-141193 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain           │ old-k8s-version-141193 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:19:29
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:19:29.307380  297519 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:19:29.307639  297519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:19:29.307648  297519 out.go:374] Setting ErrFile to fd 2...
	I1010 18:19:29.307652  297519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:19:29.307848  297519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:19:29.308359  297519 out.go:368] Setting JSON to false
	I1010 18:19:29.309544  297519 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3709,"bootTime":1760116660,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:19:29.309628  297519 start.go:141] virtualization: kvm guest
	I1010 18:19:29.311539  297519 out.go:179] * [embed-certs-472518] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:19:29.312789  297519 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:19:29.312818  297519 notify.go:220] Checking for updates...
	I1010 18:19:29.315279  297519 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:19:29.316353  297519 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:19:29.317357  297519 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:19:29.318296  297519 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:19:29.319275  297519 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:19:29.320678  297519 config.go:182] Loaded profile config "bridge-078032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:19:29.320777  297519 config.go:182] Loaded profile config "no-preload-556024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:19:29.320865  297519 config.go:182] Loaded profile config "old-k8s-version-141193": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1010 18:19:29.320961  297519 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:19:29.347326  297519 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:19:29.347447  297519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:19:29.415299  297519 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-10-10 18:19:29.403926888 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:19:29.415450  297519 docker.go:318] overlay module found
	I1010 18:19:29.417099  297519 out.go:179] * Using the docker driver based on user configuration
	I1010 18:19:29.418152  297519 start.go:305] selected driver: docker
	I1010 18:19:29.418168  297519 start.go:925] validating driver "docker" against <nil>
	I1010 18:19:29.418182  297519 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:19:29.418778  297519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:19:29.480646  297519 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-10-10 18:19:29.469973553 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:19:29.480836  297519 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1010 18:19:29.481123  297519 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:19:29.486487  297519 out.go:179] * Using Docker driver with root privileges
	I1010 18:19:29.487515  297519 cni.go:84] Creating CNI manager for ""
	I1010 18:19:29.487573  297519 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:19:29.487585  297519 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1010 18:19:29.487636  297519 start.go:349] cluster config:
	{Name:embed-certs-472518 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:19:29.488847  297519 out.go:179] * Starting "embed-certs-472518" primary control-plane node in "embed-certs-472518" cluster
	I1010 18:19:29.489954  297519 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:19:29.491026  297519 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:19:29.492039  297519 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:19:29.492088  297519 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:19:29.492107  297519 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:19:29.492115  297519 cache.go:58] Caching tarball of preloaded images
	I1010 18:19:29.492198  297519 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:19:29.492209  297519 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1010 18:19:29.492323  297519 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/config.json ...
	I1010 18:19:29.492345  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/config.json: {Name:mkf4940505c7ee133425c43eda360cf6e2c7ca37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:29.513009  297519 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:19:29.513028  297519 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:19:29.513047  297519 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:19:29.513084  297519 start.go:360] acquireMachinesLock for embed-certs-472518: {Name:mk9cc494f12a6273567ade3e880d684508b52f40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:19:29.513193  297519 start.go:364] duration metric: took 89.205µs to acquireMachinesLock for "embed-certs-472518"
	I1010 18:19:29.513217  297519 start.go:93] Provisioning new machine with config: &{Name:embed-certs-472518 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:19:29.513310  297519 start.go:125] createHost starting for "" (driver="docker")
	I1010 18:19:26.202203  290755 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.408500221s)
	I1010 18:19:26.202229  290755 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1010 18:19:26.202248  290755 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1010 18:19:26.202265  290755 ssh_runner.go:235] Completed: which crictl: (1.408461538s)
	I1010 18:19:26.202302  290755 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1010 18:19:26.202319  290755 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:19:27.436400  290755 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.23407664s)
	I1010 18:19:27.436424  290755 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1010 18:19:27.436439  290755 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1010 18:19:27.436474  290755 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1010 18:19:27.436477  290755 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.234133965s)
	I1010 18:19:27.436545  290755 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:19:27.468040  290755 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:19:28.903594  290755 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.435515932s)
	I1010 18:19:28.903650  290755 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1010 18:19:28.903592  290755 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.467091411s)
	I1010 18:19:28.903722  290755 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1010 18:19:28.903743  290755 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1010 18:19:28.903758  290755 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1010 18:19:28.903790  290755 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1010 18:19:28.909081  290755 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1010 18:19:28.909119  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1010 18:19:30.434480  290755 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.530663151s)
	I1010 18:19:30.434509  290755 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1010 18:19:30.434544  290755 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1010 18:19:30.434604  290755 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1010 18:19:26.432444  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1010 18:19:27.227700  284725 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 18:19:27.227790  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:27.227806  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-141193 minikube.k8s.io/updated_at=2025_10_10T18_19_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46 minikube.k8s.io/name=old-k8s-version-141193 minikube.k8s.io/primary=true
	I1010 18:19:27.241265  284725 ops.go:34] apiserver oom_adj: -16
	I1010 18:19:27.328182  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:27.829143  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:28.329274  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:28.829277  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:29.329094  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:29.828796  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:30.329234  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:30.829191  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:31.328940  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1010 18:19:29.390497  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	W1010 18:19:31.891201  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	I1010 18:19:29.515382  297519 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1010 18:19:29.515627  297519 start.go:159] libmachine.API.Create for "embed-certs-472518" (driver="docker")
	I1010 18:19:29.515666  297519 client.go:168] LocalClient.Create starting
	I1010 18:19:29.515737  297519 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem
	I1010 18:19:29.515777  297519 main.go:141] libmachine: Decoding PEM data...
	I1010 18:19:29.515803  297519 main.go:141] libmachine: Parsing certificate...
	I1010 18:19:29.515865  297519 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem
	I1010 18:19:29.515895  297519 main.go:141] libmachine: Decoding PEM data...
	I1010 18:19:29.515908  297519 main.go:141] libmachine: Parsing certificate...
	I1010 18:19:29.516365  297519 cli_runner.go:164] Run: docker network inspect embed-certs-472518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1010 18:19:29.534588  297519 cli_runner.go:211] docker network inspect embed-certs-472518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1010 18:19:29.534671  297519 network_create.go:284] running [docker network inspect embed-certs-472518] to gather additional debugging logs...
	I1010 18:19:29.534696  297519 cli_runner.go:164] Run: docker network inspect embed-certs-472518
	W1010 18:19:29.553547  297519 cli_runner.go:211] docker network inspect embed-certs-472518 returned with exit code 1
	I1010 18:19:29.553594  297519 network_create.go:287] error running [docker network inspect embed-certs-472518]: docker network inspect embed-certs-472518: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-472518 not found
	I1010 18:19:29.553614  297519 network_create.go:289] output of [docker network inspect embed-certs-472518]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-472518 not found
	
	** /stderr **
	I1010 18:19:29.553772  297519 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:19:29.572947  297519 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f8fb0c8a54c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:51:a2:ab:ca:d6} reservation:<nil>}
	I1010 18:19:29.573938  297519 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bdbbffbd65c1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:11:33:77:48:20} reservation:<nil>}
	I1010 18:19:29.575016  297519 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b6a5dab2001 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:93:a5:d3:c3:8f} reservation:<nil>}
	I1010 18:19:29.575896  297519 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-62177a68d9eb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:70:f2:a2:da:00} reservation:<nil>}
	I1010 18:19:29.576771  297519 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-7dff4078001c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:82:9e:3a:78:07:0b} reservation:<nil>}
	I1010 18:19:29.577819  297519 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fe1c30}
	I1010 18:19:29.577853  297519 network_create.go:124] attempt to create docker network embed-certs-472518 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1010 18:19:29.577909  297519 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-472518 embed-certs-472518
	I1010 18:19:29.638760  297519 network_create.go:108] docker network embed-certs-472518 192.168.94.0/24 created
	I1010 18:19:29.638795  297519 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-472518" container
	I1010 18:19:29.638864  297519 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1010 18:19:29.657821  297519 cli_runner.go:164] Run: docker volume create embed-certs-472518 --label name.minikube.sigs.k8s.io=embed-certs-472518 --label created_by.minikube.sigs.k8s.io=true
	I1010 18:19:29.680548  297519 oci.go:103] Successfully created a docker volume embed-certs-472518
	I1010 18:19:29.680634  297519 cli_runner.go:164] Run: docker run --rm --name embed-certs-472518-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-472518 --entrypoint /usr/bin/test -v embed-certs-472518:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -d /var/lib
	I1010 18:19:30.653103  297519 oci.go:107] Successfully prepared a docker volume embed-certs-472518
	I1010 18:19:30.653168  297519 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:19:30.653194  297519 kic.go:194] Starting extracting preloaded images to volume ...
	I1010 18:19:30.653259  297519 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-472518:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1010 18:19:35.958583  290755 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.523955948s)
	I1010 18:19:35.958613  290755 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1010 18:19:35.958639  290755 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1010 18:19:35.958684  290755 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1010 18:19:31.829156  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:32.329031  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:32.829328  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:33.328835  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:33.829064  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:34.328594  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:34.828997  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:35.328857  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:35.829243  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:36.329252  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1010 18:19:33.922978  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	W1010 18:19:36.389402  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	I1010 18:19:36.829175  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:37.329152  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:37.828660  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:38.329035  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:38.829239  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:38.910444  284725 kubeadm.go:1113] duration metric: took 11.682717077s to wait for elevateKubeSystemPrivileges
	I1010 18:19:38.910486  284725 kubeadm.go:402] duration metric: took 22.874508869s to StartCluster
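
The block of identical `kubectl get sa default` runs above is a readiness poll: after `kubeadm init`, minikube retries roughly every 500ms until the `default` service account exists, which is what the 11.68s `elevateKubeSystemPrivileges` metric measures. A sketch of that pattern, assuming the binary and kubeconfig paths from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` until it succeeds or the
// deadline passes; success means the service-account controller has run.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.0/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}
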
	I1010 18:19:38.910508  284725 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:38.910586  284725 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:19:38.911936  284725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:38.912266  284725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:19:38.912275  284725 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:19:38.912349  284725 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:19:38.912448  284725 config.go:182] Loaded profile config "old-k8s-version-141193": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1010 18:19:38.912517  284725 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-141193"
	I1010 18:19:38.912540  284725 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-141193"
	I1010 18:19:38.912563  284725 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-141193"
	I1010 18:19:38.912587  284725 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-141193"
	I1010 18:19:38.912618  284725 host.go:66] Checking if "old-k8s-version-141193" exists ...
	I1010 18:19:38.912962  284725 cli_runner.go:164] Run: docker container inspect old-k8s-version-141193 --format={{.State.Status}}
	I1010 18:19:38.913195  284725 cli_runner.go:164] Run: docker container inspect old-k8s-version-141193 --format={{.State.Status}}
	I1010 18:19:38.914508  284725 out.go:179] * Verifying Kubernetes components...
	I1010 18:19:38.915692  284725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:19:38.940475  284725 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-141193"
	I1010 18:19:38.940524  284725 host.go:66] Checking if "old-k8s-version-141193" exists ...
	I1010 18:19:38.940990  284725 cli_runner.go:164] Run: docker container inspect old-k8s-version-141193 --format={{.State.Status}}
	I1010 18:19:38.941661  284725 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:19:36.235326  297519 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-472518:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.582014005s)
	I1010 18:19:36.235353  297519 kic.go:203] duration metric: took 5.582156324s to extract preloaded images to volume ...
	W1010 18:19:36.235438  297519 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1010 18:19:36.235466  297519 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1010 18:19:36.235508  297519 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1010 18:19:36.298744  297519 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-472518 --name embed-certs-472518 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-472518 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-472518 --network embed-certs-472518 --ip 192.168.94.2 --volume embed-certs-472518:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6
	I1010 18:19:36.660277  297519 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Running}}
	I1010 18:19:36.683109  297519 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:19:36.706289  297519 cli_runner.go:164] Run: docker exec embed-certs-472518 stat /var/lib/dpkg/alternatives/iptables
	I1010 18:19:36.758639  297519 oci.go:144] the created container "embed-certs-472518" has a running status.
	I1010 18:19:36.758670  297519 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa...
	I1010 18:19:36.927753  297519 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1010 18:19:36.958499  297519 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:19:36.989724  297519 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1010 18:19:36.989767  297519 kic_runner.go:114] Args: [docker exec --privileged embed-certs-472518 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1010 18:19:37.085145  297519 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:19:37.103765  297519 machine.go:93] provisionDockerMachine start ...
	I1010 18:19:37.103877  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:37.121859  297519 main.go:141] libmachine: Using SSH client type: native
	I1010 18:19:37.122156  297519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1010 18:19:37.122185  297519 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:19:37.278934  297519 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-472518
	
	I1010 18:19:37.278974  297519 ubuntu.go:182] provisioning hostname "embed-certs-472518"
	I1010 18:19:37.279036  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:37.301787  297519 main.go:141] libmachine: Using SSH client type: native
	I1010 18:19:37.302122  297519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1010 18:19:37.302147  297519 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-472518 && echo "embed-certs-472518" | sudo tee /etc/hostname
	I1010 18:19:37.462953  297519 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-472518
	
	I1010 18:19:37.463092  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:37.483355  297519 main.go:141] libmachine: Using SSH client type: native
	I1010 18:19:37.483562  297519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1010 18:19:37.483581  297519 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-472518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-472518/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-472518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:19:37.619633  297519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
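
The "Using SSH client type: native" lines mean libmachine is running these provisioning commands over Go's SSH stack rather than shelling out to ssh(1). A minimal equivalent using golang.org/x/crypto/ssh, assuming the key path and mapped port (127.0.0.1:33098) shown above:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(".minikube/machines/embed-certs-472518/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33098", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	// Same first command the provisioner runs above.
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("remote hostname: %s", out)
}
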
	I1010 18:19:37.619663  297519 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:19:37.619700  297519 ubuntu.go:190] setting up certificates
	I1010 18:19:37.619721  297519 provision.go:84] configureAuth start
	I1010 18:19:37.619782  297519 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-472518
	I1010 18:19:37.639524  297519 provision.go:143] copyHostCerts
	I1010 18:19:37.639581  297519 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:19:37.639590  297519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:19:37.639653  297519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:19:37.639753  297519 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:19:37.639762  297519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:19:37.639792  297519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:19:37.639892  297519 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:19:37.639904  297519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:19:37.639944  297519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:19:37.640194  297519 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.embed-certs-472518 san=[127.0.0.1 192.168.94.2 embed-certs-472518 localhost minikube]
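
The "generating server cert" step issues a certificate whose SANs are exactly the list logged above (127.0.0.1, the container IP, the node name, localhost, minikube), so the machine's TLS identity is valid for every address clients may use. A compact sketch with crypto/x509; it self-signs for brevity where minikube signs with its ca.pem/ca-key.pem, and the key size is an assumption:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-472518"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config
		DNSNames:     []string{"embed-certs-472518", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here (template used as its own parent); minikube passes the CA
	// cert and key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("server cert: %d DER bytes", len(der))
}
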
	I1010 18:19:37.711804  297519 provision.go:177] copyRemoteCerts
	I1010 18:19:37.711857  297519 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:19:37.711895  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:37.732095  297519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:19:37.838992  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:19:37.866714  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 18:19:37.888945  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1010 18:19:37.918493  297519 provision.go:87] duration metric: took 298.757472ms to configureAuth
	I1010 18:19:37.918529  297519 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:19:37.918725  297519 config.go:182] Loaded profile config "embed-certs-472518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:19:37.918889  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:37.942356  297519 main.go:141] libmachine: Using SSH client type: native
	I1010 18:19:37.942602  297519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1010 18:19:37.942622  297519 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:19:38.254626  297519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:19:38.254651  297519 machine.go:96] duration metric: took 1.150858414s to provisionDockerMachine
	I1010 18:19:38.254663  297519 client.go:171] duration metric: took 8.738987356s to LocalClient.Create
	I1010 18:19:38.254682  297519 start.go:167] duration metric: took 8.739055799s to libmachine.API.Create "embed-certs-472518"
	I1010 18:19:38.254691  297519 start.go:293] postStartSetup for "embed-certs-472518" (driver="docker")
	I1010 18:19:38.254708  297519 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:19:38.254780  297519 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:19:38.254843  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:38.274793  297519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:19:38.380997  297519 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:19:38.385778  297519 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:19:38.385812  297519 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:19:38.385824  297519 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:19:38.385897  297519 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:19:38.386015  297519 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:19:38.386329  297519 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:19:38.399687  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:19:38.433205  297519 start.go:296] duration metric: took 178.496265ms for postStartSetup
	I1010 18:19:38.433649  297519 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-472518
	I1010 18:19:38.457310  297519 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/config.json ...
	I1010 18:19:38.457685  297519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:19:38.457744  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:38.481945  297519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:19:38.584022  297519 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:19:38.589083  297519 start.go:128] duration metric: took 9.075756126s to createHost
	I1010 18:19:38.589110  297519 start.go:83] releasing machines lock for "embed-certs-472518", held for 9.075905248s
	I1010 18:19:38.589174  297519 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-472518
	I1010 18:19:38.608767  297519 ssh_runner.go:195] Run: cat /version.json
	I1010 18:19:38.608827  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:38.608846  297519 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:19:38.608919  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:38.632034  297519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:19:38.632792  297519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:19:38.800917  297519 ssh_runner.go:195] Run: systemctl --version
	I1010 18:19:38.808600  297519 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:19:38.855293  297519 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:19:38.861342  297519 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:19:38.861410  297519 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:19:38.891858  297519 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:19:38.891887  297519 start.go:495] detecting cgroup driver to use...
	I1010 18:19:38.891918  297519 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:19:38.891971  297519 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:19:38.913275  297519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:19:38.933211  297519 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:19:38.933272  297519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:19:38.961814  297519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:19:38.989581  297519 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:19:39.113116  297519 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:19:39.242841  297519 docker.go:234] disabling docker service ...
	I1010 18:19:39.242909  297519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:19:39.271389  297519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:19:39.292339  297519 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:19:38.942888  284725 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:19:38.942907  284725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:19:38.942960  284725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-141193
	I1010 18:19:38.970721  284725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/old-k8s-version-141193/id_rsa Username:docker}
	I1010 18:19:38.975861  284725 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:19:38.975942  284725 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:19:38.976121  284725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-141193
	I1010 18:19:39.001036  284725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/old-k8s-version-141193/id_rsa Username:docker}
	I1010 18:19:39.029164  284725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1010 18:19:39.079267  284725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:19:39.182181  284725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:19:39.196892  284725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:19:39.325325  284725 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-141193" to be "Ready" ...
	I1010 18:19:39.325433  284725 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1010 18:19:39.645778  284725 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1010 18:19:39.426959  297519 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:19:39.529895  297519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:19:39.544847  297519 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:19:39.562962  297519 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:19:39.563028  297519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:19:39.579129  297519 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:19:39.579188  297519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:19:39.590948  297519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:19:39.603114  297519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:19:39.616530  297519 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:19:39.628641  297519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:19:39.640699  297519 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:19:39.658670  297519 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:19:39.668732  297519 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:19:39.677245  297519 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:19:39.685122  297519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:19:39.768678  297519 ssh_runner.go:195] Run: sudo systemctl restart crio
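
The run of `sed -i` commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before the final daemon-reload and crio restart. A sketch of one such in-place substitution in Go, mirroring the cgroup_manager edit; the path and key are taken from the log:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := re.ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
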
	I1010 18:19:40.416735  297519 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:19:40.416808  297519 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:19:40.420997  297519 start.go:563] Will wait 60s for crictl version
	I1010 18:19:40.421064  297519 ssh_runner.go:195] Run: which crictl
	I1010 18:19:40.424835  297519 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:19:40.451116  297519 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:19:40.451193  297519 ssh_runner.go:195] Run: crio --version
	I1010 18:19:40.478720  297519 ssh_runner.go:195] Run: crio --version
	I1010 18:19:40.508073  297519 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:19:36.573316  290755 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1010 18:19:36.573357  290755 cache_images.go:124] Successfully loaded all cached images
	I1010 18:19:36.573362  290755 cache_images.go:93] duration metric: took 14.187620191s to LoadCachedImages
	I1010 18:19:36.573372  290755 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1010 18:19:36.573462  290755 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-556024 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:19:36.573521  290755 ssh_runner.go:195] Run: crio config
	I1010 18:19:36.621515  290755 cni.go:84] Creating CNI manager for ""
	I1010 18:19:36.621547  290755 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:19:36.621568  290755 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 18:19:36.621599  290755 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-556024 NodeName:no-preload-556024 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:19:36.621768  290755 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-556024"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:19:36.621843  290755 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:19:36.631706  290755 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1010 18:19:36.631757  290755 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1010 18:19:36.641973  290755 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1010 18:19:36.642034  290755 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1010 18:19:36.642086  290755 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1010 18:19:36.642108  290755 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1010 18:19:36.646990  290755 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1010 18:19:36.647016  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1010 18:19:38.052385  290755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:19:38.067302  290755 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1010 18:19:38.071520  290755 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1010 18:19:38.071546  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1010 18:19:38.313164  290755 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1010 18:19:38.318677  290755 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1010 18:19:38.318705  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
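
Because this profile runs with no preload, the kubectl/kubelet/kubeadm binaries are downloaded with a `checksum=file:...sha256` query: the digest is fetched from the adjacent .sha256 file and verified before the binary is trusted and scp'd onto the node. A sketch of that verify-then-install pattern; the URL is from the log, the destination path is illustrative:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		log.Fatal(err)
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // file holds the hex digest
	if hex.EncodeToString(got[:]) != want {
		log.Fatal("checksum mismatch; refusing to install")
	}
	fmt.Println("checksum verified, writing binary")
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		log.Fatal(err)
	}
}
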
	I1010 18:19:38.507363  290755 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:19:38.517962  290755 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1010 18:19:38.533867  290755 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:19:38.552255  290755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1010 18:19:38.567480  290755 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:19:38.571905  290755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
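
The one-liner above updates /etc/hosts by filtering and appending: drop any existing tab-separated record for the name, echo the fresh one into a temp file, then copy it over /etc/hosts. The same logic in Go (root is required to write /etc/hosts; the name and IP are taken from the log):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal" // name from the log
	const record = "192.168.76.2\t" + host

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	// Keep every line that is not an old record for this name (the grep -v),
	// then append the fresh record (the echo).
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, record)
	out := strings.Join(kept, "\n") + "\n"
	if err := os.WriteFile("/etc/hosts", []byte(out), 0o644); err != nil {
		log.Fatal(err)
	}
}
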
	I1010 18:19:38.583661  290755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:19:38.680114  290755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:19:38.707986  290755 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024 for IP: 192.168.76.2
	I1010 18:19:38.708008  290755 certs.go:195] generating shared ca certs ...
	I1010 18:19:38.708035  290755 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:38.708231  290755 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:19:38.708290  290755 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:19:38.708300  290755 certs.go:257] generating profile certs ...
	I1010 18:19:38.708367  290755 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/client.key
	I1010 18:19:38.708380  290755 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/client.crt with IP's: []
	I1010 18:19:38.995610  290755 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/client.crt ...
	I1010 18:19:38.995641  290755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/client.crt: {Name:mk8d9b4af8bddce1ee92933f77d78e6f9633cf59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:38.995827  290755 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/client.key ...
	I1010 18:19:38.995849  290755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/client.key: {Name:mkc826ef11a17b59b6dfeb7d86cbbfc96e59b639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:38.995960  290755 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key.b1bc56db
	I1010 18:19:38.995983  290755 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.crt.b1bc56db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1010 18:19:39.012404  290755 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.crt.b1bc56db ...
	I1010 18:19:39.012435  290755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.crt.b1bc56db: {Name:mk59e852199090b6eb5e2b3ca08754e93a3483bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:39.013257  290755 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key.b1bc56db ...
	I1010 18:19:39.013287  290755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key.b1bc56db: {Name:mk8d27a8b014996e0751bb5e6f7809aba94d859f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:39.013430  290755 certs.go:382] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.crt.b1bc56db -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.crt
	I1010 18:19:39.013537  290755 certs.go:386] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key.b1bc56db -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key
	I1010 18:19:39.013642  290755 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.key
	I1010 18:19:39.013671  290755 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.crt with IP's: []
	I1010 18:19:39.220834  290755 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.crt ...
	I1010 18:19:39.220862  290755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.crt: {Name:mk9f0d43bcac37a4c843d2cb582f0c2adfc93eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:39.221038  290755 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.key ...
	I1010 18:19:39.221071  290755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.key: {Name:mkfb662165f70ef0a56cb9b08c738bf2739ae8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:39.221347  290755 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:19:39.221387  290755 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:19:39.221397  290755 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:19:39.221426  290755 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:19:39.221466  290755 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:19:39.221497  290755 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:19:39.221564  290755 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:19:39.223673  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:19:39.259242  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:19:39.289605  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:19:39.319469  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:19:39.359403  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:19:39.389160  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:19:39.421558  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:19:39.449548  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 18:19:39.481154  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:19:39.504128  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:19:39.527111  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:19:39.550221  290755 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:19:39.567974  290755 ssh_runner.go:195] Run: openssl version
	I1010 18:19:39.577149  290755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:19:39.590131  290755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:19:39.596266  290755 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:19:39.596353  290755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:19:39.645929  290755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:19:39.656751  290755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:19:39.667911  290755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:19:39.672568  290755 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:19:39.672629  290755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:19:39.709916  290755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:19:39.723761  290755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:19:39.734227  290755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:19:39.738430  290755 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:19:39.738480  290755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:19:39.774724  290755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
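
Each `openssl x509 -hash` plus `ln -fs .../<hash>.0` pair above installs a certificate into the system trust store using OpenSSL's subject-hash symlink convention. A sketch that shells out for the hash, as the commands above do, then creates the link; the PEM path is one of those from the log, and root is required for /etc/ssl/certs:

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	// openssl prints the subject-name hash used to name trust-store links.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		log.Fatal(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	// ln -fs equivalent; ignore "already exists" so reruns are harmless.
	if err := os.Symlink(pem, link); err != nil && !os.IsExist(err) {
		log.Fatal(err)
	}
	log.Printf("linked %s to %s", link, pem)
}
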
	I1010 18:19:39.785967  290755 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:19:39.790333  290755 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:19:39.790380  290755 kubeadm.go:400] StartCluster: {Name:no-preload-556024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:19:39.790446  290755 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:19:39.790501  290755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:19:39.820090  290755 cri.go:89] found id: ""
	I1010 18:19:39.820169  290755 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:19:39.830518  290755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 18:19:39.839615  290755 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1010 18:19:39.839662  290755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 18:19:39.848419  290755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 18:19:39.848434  290755 kubeadm.go:157] found existing configuration files:
	
	I1010 18:19:39.848467  290755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 18:19:39.857342  290755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 18:19:39.857392  290755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 18:19:39.866148  290755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 18:19:39.874901  290755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 18:19:39.874944  290755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 18:19:39.883262  290755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 18:19:39.893264  290755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 18:19:39.893314  290755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 18:19:39.901999  290755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 18:19:39.910570  290755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 18:19:39.910625  290755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 18:19:39.919046  290755 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1010 18:19:39.976294  290755 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1010 18:19:40.036602  290755 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 18:19:39.646755  284725 addons.go:514] duration metric: took 734.392931ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1010 18:19:39.829842  284725 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-141193" context rescaled to 1 replicas
	W1010 18:19:41.329751  284725 node_ready.go:57] node "old-k8s-version-141193" has "Ready":"False" status (will retry)
	W1010 18:19:38.390151  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	W1010 18:19:40.890568  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	I1010 18:19:40.509087  297519 cli_runner.go:164] Run: docker network inspect embed-certs-472518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:19:40.527846  297519 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1010 18:19:40.532038  297519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:19:40.543233  297519 kubeadm.go:883] updating cluster {Name:embed-certs-472518 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:19:40.543355  297519 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:19:40.543406  297519 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:19:40.576079  297519 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:19:40.576103  297519 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:19:40.576149  297519 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:19:40.603286  297519 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:19:40.603307  297519 cache_images.go:85] Images are preloaded, skipping loading
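
The preload check above runs `sudo crictl images --output json` and decides whether the expected images already exist on the node, in which case tarball extraction is skipped. A sketch of that decision, assuming crictl's JSON shape ({"images":[{"repoTags":[...]}]}; the shape matches current crictl but is an assumption here, and the image name in main is only an example):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type criImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// preloaded reports whether every wanted image already has a repo
	// tag on the node, so loading can be skipped.
	func preloaded(want []string) (bool, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return false, err
		}
		var resp criImages
		if err := json.Unmarshal(out, &resp); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range resp.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, w := range want {
			if !have[w] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		ok, err := preloaded([]string{"registry.k8s.io/kube-apiserver:v1.34.1"})
		fmt.Println(ok, err)
	}
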
	I1010 18:19:40.603316  297519 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1010 18:19:40.603416  297519 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-472518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
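
One detail worth noting in the generated kubelet drop-in above: the bare `ExecStart=` line is not a mistake. For a non-oneshot systemd service a drop-in cannot add a second ExecStart, so the empty assignment first clears the ExecStart inherited from the base kubelet unit, and the following line replaces it with the fully flagged invocation.
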
	I1010 18:19:40.603489  297519 ssh_runner.go:195] Run: crio config
	I1010 18:19:40.650771  297519 cni.go:84] Creating CNI manager for ""
	I1010 18:19:40.650795  297519 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:19:40.650818  297519 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 18:19:40.650846  297519 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-472518 NodeName:embed-certs-472518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:19:40.650994  297519 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-472518"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:19:40.651097  297519 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:19:40.660819  297519 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:19:40.660881  297519 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:19:40.669550  297519 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1010 18:19:40.685131  297519 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:19:40.700924  297519 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
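
The kubeadm.yaml just copied to the node is a single file holding four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), as rendered earlier in the log. A small sketch of reading such a multi-document stream and listing each document's kind, e.g. to sanity-check that all four made it into the file (gopkg.in/yaml.v3 is an assumed dependency of the sketch, not something the log shows):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// kinds decodes every document in a multi-document YAML file and
	// returns its apiVersion/kind header.
	func kinds(path string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		var out []string
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				return out, nil
			} else if err != nil {
				return nil, err
			}
			out = append(out, doc.APIVersion+"/"+doc.Kind)
		}
	}

	func main() {
		ks, err := kinds("/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(ks, err)
	}
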
	I1010 18:19:40.714711  297519 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:19:40.718537  297519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:19:40.730221  297519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:19:40.813343  297519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:19:40.842358  297519 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518 for IP: 192.168.94.2
	I1010 18:19:40.842384  297519 certs.go:195] generating shared ca certs ...
	I1010 18:19:40.842414  297519 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:40.842575  297519 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:19:40.842641  297519 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:19:40.842652  297519 certs.go:257] generating profile certs ...
	I1010 18:19:40.842727  297519 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/client.key
	I1010 18:19:40.842755  297519 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/client.crt with IP's: []
	I1010 18:19:41.140872  297519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/client.crt ...
	I1010 18:19:41.140951  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/client.crt: {Name:mk90ff9ee7c79c588a4bba8e2b2913e9b2856169 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:41.141174  297519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/client.key ...
	I1010 18:19:41.141222  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/client.key: {Name:mk5f78037f64b29cdbc4aed24a925c0104c67521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:41.141357  297519 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key.37abe28c
	I1010 18:19:41.141374  297519 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.crt.37abe28c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1010 18:19:42.118734  297519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.crt.37abe28c ...
	I1010 18:19:42.118766  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.crt.37abe28c: {Name:mk6adb94e12ef4d6ec0b143de7d4e7b3b5f49cfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:42.119006  297519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key.37abe28c ...
	I1010 18:19:42.119029  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key.37abe28c: {Name:mk253a2f9e9b37a69fe2c704ee927519b2f475b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:42.119182  297519 certs.go:382] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.crt.37abe28c -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.crt
	I1010 18:19:42.119295  297519 certs.go:386] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key.37abe28c -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key
	I1010 18:19:42.119362  297519 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.key
	I1010 18:19:42.119379  297519 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.crt with IP's: []
	I1010 18:19:42.263357  297519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.crt ...
	I1010 18:19:42.263382  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.crt: {Name:mk3ab2a5390977dc0ccd0e8ceb1ea219bfba11ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:42.263564  297519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.key ...
	I1010 18:19:42.263581  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.key: {Name:mk52d2b0e3abb2f7b91c72f29faca7726f4e4d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
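
Each profile cert above is produced the same way: a fresh key pair is signed by the already-present minikube CA, with the SAN IPs shown in the log (10.96.0.1 is the in-cluster service VIP, 192.168.94.2 the node IP). A condensed sketch of that signing step with crypto/x509; the CA in main is a throwaway so the sketch is self-contained, and serial handling plus the file locking seen in the log are omitted:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// signServingCert issues a serving cert for the given IP SANs,
	// signed by the provided CA.
	func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
	}

	func main() {
		// Throwaway CA for the sketch; minikube reuses the CA under .minikube/.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
		certPEM, err := signServingCert(caCert, caKey,
			[]net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")})
		fmt.Println(len(certPEM), err)
	}
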
	I1010 18:19:42.263785  297519 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:19:42.263821  297519 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:19:42.263832  297519 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:19:42.263852  297519 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:19:42.263877  297519 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:19:42.263899  297519 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:19:42.263936  297519 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:19:42.264523  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:19:42.285852  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:19:42.306370  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:19:42.328942  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:19:42.351952  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1010 18:19:42.372897  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 18:19:42.393127  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:19:42.415877  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 18:19:42.437547  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:19:42.459534  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:19:42.478927  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:19:42.497810  297519 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:19:42.511562  297519 ssh_runner.go:195] Run: openssl version
	I1010 18:19:42.517561  297519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:19:42.526934  297519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:19:42.530997  297519 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:19:42.531059  297519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:19:42.565782  297519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:19:42.575313  297519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:19:42.585507  297519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:19:42.589373  297519 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:19:42.589430  297519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:19:42.633495  297519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:19:42.644465  297519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:19:42.654370  297519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:19:42.658277  297519 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:19:42.658319  297519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:19:42.694959  297519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
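
The openssl/ln pairs above implement what c_rehash does: OpenSSL locates a trusted CA by the hash of its subject name, so each PEM under /usr/share/ca-certificates gets a "<subject-hash>.0" symlink in /etc/ssl/certs (the ".0" suffix marks the first cert with that hash). A sketch of one iteration, shelling out to openssl the same way the log does:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// rehash links certPEM into dir under its OpenSSL subject hash, so
	// verifiers that scan dir by hash can find it.
	func rehash(certPEM, dir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPEM).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join(dir, hash+".0")
		os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(certPEM, link)
	}

	func main() {
		if err := rehash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
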
	I1010 18:19:42.705479  297519 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:19:42.709489  297519 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
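
The stat failure above is expected and is used as a signal rather than treated as an error: if apiserver-kubelet-client.crt is absent, the cluster has never been initialized, so a full kubeadm init follows instead of a restart path. The local-file equivalent of that probe in Go:

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
	)

	func main() {
		_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		switch {
		case err == nil:
			fmt.Println("cert exists: cluster was initialized before")
		case errors.Is(err, fs.ErrNotExist):
			fmt.Println("no cert: likely first start, run kubeadm init")
		default:
			fmt.Println("probe failed:", err)
		}
	}
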
	I1010 18:19:42.709551  297519 kubeadm.go:400] StartCluster: {Name:embed-certs-472518 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:19:42.709628  297519 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:19:42.709665  297519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:19:42.737960  297519 cri.go:89] found id: ""
	I1010 18:19:42.738023  297519 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:19:42.748401  297519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 18:19:42.757553  297519 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1010 18:19:42.757594  297519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 18:19:42.766074  297519 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 18:19:42.766092  297519 kubeadm.go:157] found existing configuration files:
	
	I1010 18:19:42.766131  297519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 18:19:42.774509  297519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 18:19:42.774558  297519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 18:19:42.782612  297519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 18:19:42.790566  297519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 18:19:42.790603  297519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 18:19:42.798312  297519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 18:19:42.806575  297519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 18:19:42.806624  297519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 18:19:42.814470  297519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 18:19:42.822367  297519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 18:19:42.822416  297519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
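
The four grep/rm pairs above are a cleanup pass: any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is treated as stale and removed so kubeadm init regenerates it (here all four files are simply missing, so each grep exits 2 and the rm is a no-op). A local sketch of the same pass, reading files directly instead of over SSH:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	func main() {
		for _, conf := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			data, err := os.ReadFile(conf)
			if err == nil && strings.Contains(string(data), endpoint) {
				continue // config already targets the right endpoint
			}
			// Missing or pointing elsewhere: remove so kubeadm rewrites it.
			os.Remove(conf)
			fmt.Println("removed stale", conf)
		}
	}
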
	I1010 18:19:42.830820  297519 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1010 18:19:42.891919  297519 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1010 18:19:42.951751  297519 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1010 18:19:43.829592  284725 node_ready.go:57] node "old-k8s-version-141193" has "Ready":"False" status (will retry)
	W1010 18:19:46.328989  284725 node_ready.go:57] node "old-k8s-version-141193" has "Ready":"False" status (will retry)
	W1010 18:19:43.387922  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	W1010 18:19:45.388011  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	I1010 18:19:51.099014  290755 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1010 18:19:51.099141  290755 kubeadm.go:318] [preflight] Running pre-flight checks
	I1010 18:19:51.099232  290755 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1010 18:19:51.099328  290755 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1010 18:19:51.099389  290755 kubeadm.go:318] OS: Linux
	I1010 18:19:51.099459  290755 kubeadm.go:318] CGROUPS_CPU: enabled
	I1010 18:19:51.099534  290755 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1010 18:19:51.099604  290755 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1010 18:19:51.099686  290755 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1010 18:19:51.099811  290755 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1010 18:19:51.099896  290755 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1010 18:19:51.099963  290755 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1010 18:19:51.100023  290755 kubeadm.go:318] CGROUPS_IO: enabled
	I1010 18:19:51.100141  290755 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 18:19:51.100276  290755 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 18:19:51.100395  290755 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 18:19:51.100489  290755 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 18:19:51.101663  290755 out.go:252]   - Generating certificates and keys ...
	I1010 18:19:51.101768  290755 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1010 18:19:51.101889  290755 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1010 18:19:51.101990  290755 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1010 18:19:51.102102  290755 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1010 18:19:51.102192  290755 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1010 18:19:51.102270  290755 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1010 18:19:51.102353  290755 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1010 18:19:51.102539  290755 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-556024] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1010 18:19:51.102618  290755 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1010 18:19:51.102814  290755 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-556024] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1010 18:19:51.102922  290755 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1010 18:19:51.103010  290755 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1010 18:19:51.103097  290755 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1010 18:19:51.103180  290755 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 18:19:51.103260  290755 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 18:19:51.103353  290755 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 18:19:51.103444  290755 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 18:19:51.103516  290755 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 18:19:51.103597  290755 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 18:19:51.103725  290755 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 18:19:51.103826  290755 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 18:19:51.105120  290755 out.go:252]   - Booting up control plane ...
	I1010 18:19:51.105199  290755 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 18:19:51.105269  290755 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 18:19:51.105338  290755 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 18:19:51.105474  290755 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 18:19:51.105594  290755 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1010 18:19:51.105735  290755 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1010 18:19:51.105819  290755 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 18:19:51.105856  290755 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1010 18:19:51.105978  290755 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 18:19:51.106149  290755 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 18:19:51.106238  290755 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.765013ms
	I1010 18:19:51.106352  290755 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1010 18:19:51.106475  290755 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1010 18:19:51.106586  290755 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1010 18:19:51.106697  290755 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1010 18:19:51.106820  290755 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.505383881s
	I1010 18:19:51.106911  290755 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.020476677s
	I1010 18:19:51.107023  290755 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501182758s
	I1010 18:19:51.107191  290755 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 18:19:51.107331  290755 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 18:19:51.107416  290755 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 18:19:51.107662  290755 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-556024 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 18:19:51.107740  290755 kubeadm.go:318] [bootstrap-token] Using token: 1dnpw3.2s7ope8v05qlu05n
	I1010 18:19:51.109746  290755 out.go:252]   - Configuring RBAC rules ...
	I1010 18:19:51.109856  290755 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 18:19:51.109967  290755 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 18:19:51.110120  290755 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 18:19:51.110278  290755 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 18:19:51.110442  290755 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 18:19:51.110571  290755 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 18:19:51.110748  290755 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 18:19:51.110818  290755 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1010 18:19:51.110863  290755 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1010 18:19:51.110869  290755 kubeadm.go:318] 
	I1010 18:19:51.110927  290755 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1010 18:19:51.110933  290755 kubeadm.go:318] 
	I1010 18:19:51.111004  290755 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1010 18:19:51.111010  290755 kubeadm.go:318] 
	I1010 18:19:51.111031  290755 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1010 18:19:51.111126  290755 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 18:19:51.111198  290755 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 18:19:51.111206  290755 kubeadm.go:318] 
	I1010 18:19:51.111268  290755 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1010 18:19:51.111277  290755 kubeadm.go:318] 
	I1010 18:19:51.111355  290755 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 18:19:51.111368  290755 kubeadm.go:318] 
	I1010 18:19:51.111439  290755 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1010 18:19:51.111550  290755 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 18:19:51.111651  290755 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 18:19:51.111661  290755 kubeadm.go:318] 
	I1010 18:19:51.111789  290755 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 18:19:51.111876  290755 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1010 18:19:51.111894  290755 kubeadm.go:318] 
	I1010 18:19:51.112017  290755 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 1dnpw3.2s7ope8v05qlu05n \
	I1010 18:19:51.112177  290755 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f \
	I1010 18:19:51.112215  290755 kubeadm.go:318] 	--control-plane 
	I1010 18:19:51.112221  290755 kubeadm.go:318] 
	I1010 18:19:51.112349  290755 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1010 18:19:51.112363  290755 kubeadm.go:318] 
	I1010 18:19:51.112501  290755 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 1dnpw3.2s7ope8v05qlu05n \
	I1010 18:19:51.112673  290755 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f 
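
The sha256 value in the join command above is not a hash of the CA certificate file: kubeadm's --discovery-token-ca-cert-hash is the SHA-256 of the CA's DER-encoded Subject Public Key Info, which joining nodes recompute from the cluster-info ConfigMap to pin the CA. A sketch of computing it from ca.crt:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Hash the DER-encoded SubjectPublicKeyInfo, not the whole cert.
		fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
	}
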
	I1010 18:19:51.112700  290755 cni.go:84] Creating CNI manager for ""
	I1010 18:19:51.112707  290755 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:19:51.115481  290755 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1010 18:19:48.329324  284725 node_ready.go:57] node "old-k8s-version-141193" has "Ready":"False" status (will retry)
	W1010 18:19:50.330258  284725 node_ready.go:57] node "old-k8s-version-141193" has "Ready":"False" status (will retry)
	W1010 18:19:47.891033  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	W1010 18:19:50.387313  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	W1010 18:19:52.388193  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	I1010 18:19:52.934726  297519 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1010 18:19:52.934821  297519 kubeadm.go:318] [preflight] Running pre-flight checks
	I1010 18:19:52.934904  297519 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1010 18:19:52.934950  297519 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1010 18:19:52.934980  297519 kubeadm.go:318] OS: Linux
	I1010 18:19:52.935019  297519 kubeadm.go:318] CGROUPS_CPU: enabled
	I1010 18:19:52.935150  297519 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1010 18:19:52.935227  297519 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1010 18:19:52.935317  297519 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1010 18:19:52.935407  297519 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1010 18:19:52.935503  297519 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1010 18:19:52.935570  297519 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1010 18:19:52.935614  297519 kubeadm.go:318] CGROUPS_IO: enabled
	I1010 18:19:52.935678  297519 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 18:19:52.935810  297519 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 18:19:52.935922  297519 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 18:19:52.935995  297519 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 18:19:52.937599  297519 out.go:252]   - Generating certificates and keys ...
	I1010 18:19:52.937702  297519 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1010 18:19:52.937764  297519 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1010 18:19:52.937837  297519 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1010 18:19:52.937913  297519 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1010 18:19:52.938002  297519 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1010 18:19:52.938087  297519 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1010 18:19:52.938167  297519 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1010 18:19:52.938292  297519 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-472518 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1010 18:19:52.938357  297519 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1010 18:19:52.938462  297519 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-472518 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1010 18:19:52.938520  297519 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1010 18:19:52.938577  297519 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1010 18:19:52.938619  297519 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1010 18:19:52.938664  297519 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 18:19:52.938707  297519 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 18:19:52.938757  297519 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 18:19:52.938811  297519 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 18:19:52.938889  297519 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 18:19:52.938967  297519 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 18:19:52.939039  297519 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 18:19:52.939133  297519 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 18:19:52.940424  297519 out.go:252]   - Booting up control plane ...
	I1010 18:19:52.940499  297519 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 18:19:52.940563  297519 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 18:19:52.940634  297519 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 18:19:52.940746  297519 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 18:19:52.940821  297519 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1010 18:19:52.940920  297519 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1010 18:19:52.940991  297519 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 18:19:52.941029  297519 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1010 18:19:52.941176  297519 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 18:19:52.941295  297519 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 18:19:52.941352  297519 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001306157s
	I1010 18:19:52.941439  297519 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1010 18:19:52.941512  297519 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1010 18:19:52.941609  297519 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1010 18:19:52.941705  297519 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1010 18:19:52.941826  297519 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.129689745s
	I1010 18:19:52.941895  297519 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.412778766s
	I1010 18:19:52.941976  297519 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002254768s
	I1010 18:19:52.942146  297519 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 18:19:52.942256  297519 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 18:19:52.942328  297519 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 18:19:52.942531  297519 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-472518 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 18:19:52.942586  297519 kubeadm.go:318] [bootstrap-token] Using token: wv6fn7.57zl6x7bcm0holor
	I1010 18:19:52.943725  297519 out.go:252]   - Configuring RBAC rules ...
	I1010 18:19:52.943845  297519 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 18:19:52.943918  297519 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 18:19:52.944036  297519 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 18:19:52.944194  297519 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 18:19:52.944369  297519 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 18:19:52.944484  297519 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 18:19:52.944615  297519 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 18:19:52.944686  297519 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1010 18:19:52.944763  297519 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1010 18:19:52.944773  297519 kubeadm.go:318] 
	I1010 18:19:52.944857  297519 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1010 18:19:52.944866  297519 kubeadm.go:318] 
	I1010 18:19:52.944983  297519 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1010 18:19:52.944998  297519 kubeadm.go:318] 
	I1010 18:19:52.945044  297519 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1010 18:19:52.945129  297519 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 18:19:52.945180  297519 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 18:19:52.945187  297519 kubeadm.go:318] 
	I1010 18:19:52.945261  297519 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1010 18:19:52.945271  297519 kubeadm.go:318] 
	I1010 18:19:52.945338  297519 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 18:19:52.945347  297519 kubeadm.go:318] 
	I1010 18:19:52.945409  297519 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1010 18:19:52.945513  297519 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 18:19:52.945576  297519 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 18:19:52.945582  297519 kubeadm.go:318] 
	I1010 18:19:52.945658  297519 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 18:19:52.945724  297519 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1010 18:19:52.945729  297519 kubeadm.go:318] 
	I1010 18:19:52.945810  297519 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token wv6fn7.57zl6x7bcm0holor \
	I1010 18:19:52.945904  297519 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f \
	I1010 18:19:52.945950  297519 kubeadm.go:318] 	--control-plane 
	I1010 18:19:52.945959  297519 kubeadm.go:318] 
	I1010 18:19:52.946083  297519 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1010 18:19:52.946097  297519 kubeadm.go:318] 
	I1010 18:19:52.946226  297519 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token wv6fn7.57zl6x7bcm0holor \
	I1010 18:19:52.946383  297519 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f 
	I1010 18:19:52.946399  297519 cni.go:84] Creating CNI manager for ""
	I1010 18:19:52.946407  297519 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:19:52.947577  297519 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1010 18:19:52.948497  297519 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1010 18:19:52.953078  297519 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1010 18:19:52.953098  297519 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1010 18:19:52.968787  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
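
The CNI step is just two operations: copy the kindnet manifest to the node (the scp above) and apply it with the node's own kubectl against the local kubeconfig, which works before any external kubeconfig exists. A sketch of that invocation, with the binary and paths taken from the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "cni apply failed:", err)
		}
	}
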
	I1010 18:19:53.257505  297519 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 18:19:53.257532  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:53.257570  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-472518 minikube.k8s.io/updated_at=2025_10_10T18_19_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46 minikube.k8s.io/name=embed-certs-472518 minikube.k8s.io/primary=true
	I1010 18:19:53.347324  297519 ops.go:34] apiserver oom_adj: -16
	I1010 18:19:53.347341  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:53.848262  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:52.828952  284725 node_ready.go:49] node "old-k8s-version-141193" is "Ready"
	I1010 18:19:52.828982  284725 node_ready.go:38] duration metric: took 13.503624439s for node "old-k8s-version-141193" to be "Ready" ...
	I1010 18:19:52.829002  284725 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:19:52.829112  284725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:19:52.842654  284725 api_server.go:72] duration metric: took 13.930344671s to wait for apiserver process to appear ...
	I1010 18:19:52.842683  284725 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:19:52.842709  284725 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:19:52.846757  284725 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1010 18:19:52.847971  284725 api_server.go:141] control plane version: v1.28.0
	I1010 18:19:52.847993  284725 api_server.go:131] duration metric: took 5.303517ms to wait for apiserver health ...
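
The healthz probe above is a plain HTTPS GET against the apiserver that counts as healthy once it returns 200 with body "ok". A sketch of polling it until healthy; the endpoint is the one from the log, and InsecureSkipVerify is used only to keep the sketch self-contained (minikube's own client configuration may differ, and a real probe should trust the cluster CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == 200 && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for healthz")
	}
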
	I1010 18:19:52.848004  284725 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:19:52.851624  284725 system_pods.go:59] 8 kube-system pods found
	I1010 18:19:52.851650  284725 system_pods.go:61] "coredns-5dd5756b68-qfwck" [d60fe80c-7b6d-46ae-bf0d-1bc8c178ebf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:19:52.851655  284725 system_pods.go:61] "etcd-old-k8s-version-141193" [624eb63b-ba8a-43ad-835d-c604e5375d5b] Running
	I1010 18:19:52.851661  284725 system_pods.go:61] "kindnet-wjlh2" [388273e8-4ad1-4584-b43c-c20634781b0a] Running
	I1010 18:19:52.851666  284725 system_pods.go:61] "kube-apiserver-old-k8s-version-141193" [cae658f3-06d6-498f-a653-5e6f227189ec] Running
	I1010 18:19:52.851672  284725 system_pods.go:61] "kube-controller-manager-old-k8s-version-141193" [9eac32cb-f48f-43c2-afcb-e3fc2a074abf] Running
	I1010 18:19:52.851676  284725 system_pods.go:61] "kube-proxy-n9klp" [7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1] Running
	I1010 18:19:52.851679  284725 system_pods.go:61] "kube-scheduler-old-k8s-version-141193" [54df6abe-d778-4d3c-a74d-bdb5c192042d] Running
	I1010 18:19:52.851684  284725 system_pods.go:61] "storage-provisioner" [ab2fa802-aedc-4f1c-ac3d-56e90d21c38b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:19:52.851693  284725 system_pods.go:74] duration metric: took 3.683568ms to wait for pod list to return data ...
	I1010 18:19:52.851699  284725 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:19:52.853935  284725 default_sa.go:45] found service account: "default"
	I1010 18:19:52.853956  284725 default_sa.go:55] duration metric: took 2.249964ms for default service account to be created ...
	I1010 18:19:52.853966  284725 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:19:52.857855  284725 system_pods.go:86] 8 kube-system pods found
	I1010 18:19:52.857882  284725 system_pods.go:89] "coredns-5dd5756b68-qfwck" [d60fe80c-7b6d-46ae-bf0d-1bc8c178ebf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:19:52.857887  284725 system_pods.go:89] "etcd-old-k8s-version-141193" [624eb63b-ba8a-43ad-835d-c604e5375d5b] Running
	I1010 18:19:52.857893  284725 system_pods.go:89] "kindnet-wjlh2" [388273e8-4ad1-4584-b43c-c20634781b0a] Running
	I1010 18:19:52.857898  284725 system_pods.go:89] "kube-apiserver-old-k8s-version-141193" [cae658f3-06d6-498f-a653-5e6f227189ec] Running
	I1010 18:19:52.857902  284725 system_pods.go:89] "kube-controller-manager-old-k8s-version-141193" [9eac32cb-f48f-43c2-afcb-e3fc2a074abf] Running
	I1010 18:19:52.857905  284725 system_pods.go:89] "kube-proxy-n9klp" [7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1] Running
	I1010 18:19:52.857908  284725 system_pods.go:89] "kube-scheduler-old-k8s-version-141193" [54df6abe-d778-4d3c-a74d-bdb5c192042d] Running
	I1010 18:19:52.857912  284725 system_pods.go:89] "storage-provisioner" [ab2fa802-aedc-4f1c-ac3d-56e90d21c38b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:19:52.857949  284725 retry.go:31] will retry after 256.84422ms: missing components: kube-dns
	I1010 18:19:53.119825  284725 system_pods.go:86] 8 kube-system pods found
	I1010 18:19:53.119854  284725 system_pods.go:89] "coredns-5dd5756b68-qfwck" [d60fe80c-7b6d-46ae-bf0d-1bc8c178ebf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:19:53.119865  284725 system_pods.go:89] "etcd-old-k8s-version-141193" [624eb63b-ba8a-43ad-835d-c604e5375d5b] Running
	I1010 18:19:53.119871  284725 system_pods.go:89] "kindnet-wjlh2" [388273e8-4ad1-4584-b43c-c20634781b0a] Running
	I1010 18:19:53.119875  284725 system_pods.go:89] "kube-apiserver-old-k8s-version-141193" [cae658f3-06d6-498f-a653-5e6f227189ec] Running
	I1010 18:19:53.119879  284725 system_pods.go:89] "kube-controller-manager-old-k8s-version-141193" [9eac32cb-f48f-43c2-afcb-e3fc2a074abf] Running
	I1010 18:19:53.119883  284725 system_pods.go:89] "kube-proxy-n9klp" [7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1] Running
	I1010 18:19:53.119886  284725 system_pods.go:89] "kube-scheduler-old-k8s-version-141193" [54df6abe-d778-4d3c-a74d-bdb5c192042d] Running
	I1010 18:19:53.119892  284725 system_pods.go:89] "storage-provisioner" [ab2fa802-aedc-4f1c-ac3d-56e90d21c38b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:19:53.119909  284725 retry.go:31] will retry after 347.42707ms: missing components: kube-dns
	I1010 18:19:53.472355  284725 system_pods.go:86] 8 kube-system pods found
	I1010 18:19:53.472387  284725 system_pods.go:89] "coredns-5dd5756b68-qfwck" [d60fe80c-7b6d-46ae-bf0d-1bc8c178ebf3] Running
	I1010 18:19:53.472395  284725 system_pods.go:89] "etcd-old-k8s-version-141193" [624eb63b-ba8a-43ad-835d-c604e5375d5b] Running
	I1010 18:19:53.472400  284725 system_pods.go:89] "kindnet-wjlh2" [388273e8-4ad1-4584-b43c-c20634781b0a] Running
	I1010 18:19:53.472406  284725 system_pods.go:89] "kube-apiserver-old-k8s-version-141193" [cae658f3-06d6-498f-a653-5e6f227189ec] Running
	I1010 18:19:53.472419  284725 system_pods.go:89] "kube-controller-manager-old-k8s-version-141193" [9eac32cb-f48f-43c2-afcb-e3fc2a074abf] Running
	I1010 18:19:53.472422  284725 system_pods.go:89] "kube-proxy-n9klp" [7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1] Running
	I1010 18:19:53.472427  284725 system_pods.go:89] "kube-scheduler-old-k8s-version-141193" [54df6abe-d778-4d3c-a74d-bdb5c192042d] Running
	I1010 18:19:53.472432  284725 system_pods.go:89] "storage-provisioner" [ab2fa802-aedc-4f1c-ac3d-56e90d21c38b] Running
	I1010 18:19:53.472442  284725 system_pods.go:126] duration metric: took 618.469286ms to wait for k8s-apps to be running ...
	I1010 18:19:53.472457  284725 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:19:53.472508  284725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:19:53.486830  284725 system_svc.go:56] duration metric: took 14.364673ms WaitForService to wait for kubelet
	I1010 18:19:53.486860  284725 kubeadm.go:586] duration metric: took 14.574554478s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:19:53.486877  284725 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:19:53.489587  284725 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:19:53.489610  284725 node_conditions.go:123] node cpu capacity is 8
	I1010 18:19:53.489630  284725 node_conditions.go:105] duration metric: took 2.747794ms to run NodePressure ...
	I1010 18:19:53.489645  284725 start.go:241] waiting for startup goroutines ...
	I1010 18:19:53.489657  284725 start.go:246] waiting for cluster config update ...
	I1010 18:19:53.489684  284725 start.go:255] writing updated cluster config ...
	I1010 18:19:53.489914  284725 ssh_runner.go:195] Run: rm -f paused
	I1010 18:19:53.493783  284725 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:19:53.497968  284725 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-qfwck" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:53.502536  284725 pod_ready.go:94] pod "coredns-5dd5756b68-qfwck" is "Ready"
	I1010 18:19:53.502555  284725 pod_ready.go:86] duration metric: took 4.567345ms for pod "coredns-5dd5756b68-qfwck" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:53.505810  284725 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:53.510096  284725 pod_ready.go:94] pod "etcd-old-k8s-version-141193" is "Ready"
	I1010 18:19:53.510121  284725 pod_ready.go:86] duration metric: took 4.29166ms for pod "etcd-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:53.512958  284725 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:53.518200  284725 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-141193" is "Ready"
	I1010 18:19:53.518225  284725 pod_ready.go:86] duration metric: took 5.242025ms for pod "kube-apiserver-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:53.521730  284725 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:53.897736  284725 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-141193" is "Ready"
	I1010 18:19:53.897762  284725 pod_ready.go:86] duration metric: took 376.012519ms for pod "kube-controller-manager-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.098310  284725 pod_ready.go:83] waiting for pod "kube-proxy-n9klp" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.498776  284725 pod_ready.go:94] pod "kube-proxy-n9klp" is "Ready"
	I1010 18:19:54.498801  284725 pod_ready.go:86] duration metric: took 400.468138ms for pod "kube-proxy-n9klp" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.698416  284725 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:55.099363  284725 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-141193" is "Ready"
	I1010 18:19:55.099406  284725 pod_ready.go:86] duration metric: took 400.966691ms for pod "kube-scheduler-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:55.099420  284725 pod_ready.go:40] duration metric: took 1.605614614s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:19:55.146513  284725 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1010 18:19:55.148157  284725 out.go:203] 
	W1010 18:19:55.149271  284725 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1010 18:19:55.150341  284725 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1010 18:19:55.151783  284725 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-141193" cluster and "default" namespace by default
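The 4m0s "extra waiting" pass above comes from minikube's pod_ready.go, which counts a pod that is gone as success as well as one that is Ready. A rough out-of-band equivalent with plain kubectl, assuming the profile's kubeconfig context is in place, is one wait per label selector:

	# hedged sketch: repeat for each of the six selectors listed in the log
	kubectl --context old-k8s-version-141193 -n kube-system wait pod \
	  --selector k8s-app=kube-dns --for condition=Ready --timeout=4m0s
	kubectl --context old-k8s-version-141193 -n kube-system wait pod \
	  --selector component=kube-scheduler --for condition=Ready --timeout=4m0s

Unlike the in-tree helper, kubectl wait fails when the selected pod disappears instead of counting it as done.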
	I1010 18:19:51.116921  290755 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1010 18:19:51.123354  290755 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1010 18:19:51.123376  290755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1010 18:19:51.142784  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1010 18:19:51.411166  290755 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 18:19:51.411272  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:51.411282  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-556024 minikube.k8s.io/updated_at=2025_10_10T18_19_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46 minikube.k8s.io/name=no-preload-556024 minikube.k8s.io/primary=true
	I1010 18:19:51.425411  290755 ops.go:34] apiserver oom_adj: -16
	I1010 18:19:51.502620  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:52.003160  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:52.502895  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:53.003785  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:53.503248  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:54.003396  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:54.503700  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:55.002752  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:55.503206  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:56.003274  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:56.077623  290755 kubeadm.go:1113] duration metric: took 4.666423167s to wait for elevateKubeSystemPrivileges
	I1010 18:19:56.077655  290755 kubeadm.go:402] duration metric: took 16.287277857s to StartCluster
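The burst of `kubectl get sa default` calls between 18:19:51 and 18:19:56 is minikube polling at 500ms intervals until the ServiceAccount controller has produced the default account, which is what elevateKubeSystemPrivileges waits on after creating the cluster-admin binding at 18:19:51.411. A minimal standalone sketch of the same two steps, assuming kubectl already targets the new cluster:

	# create the binding, then poll until the default ServiceAccount exists
	kubectl create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default
	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
	  sleep 0.5
	done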
	I1010 18:19:56.077673  290755 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:56.077767  290755 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:19:56.079085  290755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:56.079348  290755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:19:56.079361  290755 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:19:56.079435  290755 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:19:56.079520  290755 addons.go:69] Setting storage-provisioner=true in profile "no-preload-556024"
	I1010 18:19:56.079526  290755 config.go:182] Loaded profile config "no-preload-556024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:19:56.079549  290755 addons.go:69] Setting default-storageclass=true in profile "no-preload-556024"
	I1010 18:19:56.079558  290755 addons.go:238] Setting addon storage-provisioner=true in "no-preload-556024"
	I1010 18:19:56.079573  290755 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-556024"
	I1010 18:19:56.079594  290755 host.go:66] Checking if "no-preload-556024" exists ...
	I1010 18:19:56.079913  290755 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:19:56.080093  290755 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:19:56.081817  290755 out.go:179] * Verifying Kubernetes components...
	I1010 18:19:56.082925  290755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:19:56.103942  290755 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:19:56.105022  290755 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:19:56.105043  290755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:19:56.105124  290755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:19:56.105296  290755 addons.go:238] Setting addon default-storageclass=true in "no-preload-556024"
	I1010 18:19:56.105339  290755 host.go:66] Checking if "no-preload-556024" exists ...
	I1010 18:19:56.105808  290755 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:19:54.888315  280647 pod_ready.go:94] pod "coredns-66bc5c9577-6pgp9" is "Ready"
	I1010 18:19:54.888342  280647 pod_ready.go:86] duration metric: took 32.506196324s for pod "coredns-66bc5c9577-6pgp9" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.888354  280647 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hwdcx" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.890145  280647 pod_ready.go:99] pod "coredns-66bc5c9577-hwdcx" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-hwdcx" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-hwdcx" not found
	I1010 18:19:54.890165  280647 pod_ready.go:86] duration metric: took 1.796566ms for pod "coredns-66bc5c9577-hwdcx" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.892554  280647 pod_ready.go:83] waiting for pod "etcd-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.896572  280647 pod_ready.go:94] pod "etcd-bridge-078032" is "Ready"
	I1010 18:19:54.896593  280647 pod_ready.go:86] duration metric: took 4.023309ms for pod "etcd-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.898779  280647 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.902533  280647 pod_ready.go:94] pod "kube-apiserver-bridge-078032" is "Ready"
	I1010 18:19:54.902552  280647 pod_ready.go:86] duration metric: took 3.751818ms for pod "kube-apiserver-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.904462  280647 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:55.290421  280647 pod_ready.go:94] pod "kube-controller-manager-bridge-078032" is "Ready"
	I1010 18:19:55.290453  280647 pod_ready.go:86] duration metric: took 385.959733ms for pod "kube-controller-manager-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:55.486162  280647 pod_ready.go:83] waiting for pod "kube-proxy-87h4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:55.887342  280647 pod_ready.go:94] pod "kube-proxy-87h4s" is "Ready"
	I1010 18:19:55.887365  280647 pod_ready.go:86] duration metric: took 401.175115ms for pod "kube-proxy-87h4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:56.087950  280647 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:56.486783  280647 pod_ready.go:94] pod "kube-scheduler-bridge-078032" is "Ready"
	I1010 18:19:56.486816  280647 pod_ready.go:86] duration metric: took 398.835264ms for pod "kube-scheduler-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:56.486831  280647 pod_ready.go:40] duration metric: took 34.109526609s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:19:56.540430  280647 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:19:56.541841  280647 out.go:179] * Done! kubectl is now configured to use "bridge-078032" cluster and "default" namespace by default
	W1010 18:19:56.548406  280647 root.go:91] failed to log command end to audit: failed to find a log row with id equal to faab746a-85bd-4429-8bb4-e2a5039cb262
	I1010 18:19:56.138224  290755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:19:56.140441  290755 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:19:56.140465  290755 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:19:56.140522  290755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:19:56.167454  290755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:19:56.183238  290755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1010 18:19:56.227730  290755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:19:56.329192  290755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:19:56.356571  290755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:19:56.360115  290755 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1010 18:19:56.364276  290755 node_ready.go:35] waiting up to 6m0s for node "no-preload-556024" to be "Ready" ...
	I1010 18:19:56.686727  290755 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
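The CoreDNS edit at 18:19:56.183 pipes the coredns ConfigMap through sed before `kubectl replace`, inserting a `log` directive ahead of `errors` and a hosts block ahead of the forward plugin. The patched Corefile should therefore contain roughly the following (a sketch; the untouched default plugins are omitted):

	.:53 {
	    log
	    errors
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	}

That hosts block is what makes host.minikube.internal resolve to the host-side bridge address (192.168.76.1 for this profile) from inside pods, matching the "host record injected into CoreDNS's ConfigMap" line above.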
	I1010 18:19:54.348043  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:54.848239  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:55.347641  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:55.848180  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:56.348383  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:56.847639  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:57.348010  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:57.848274  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:57.971252  297519 kubeadm.go:1113] duration metric: took 4.713784514s to wait for elevateKubeSystemPrivileges
	I1010 18:19:57.971316  297519 kubeadm.go:402] duration metric: took 15.261769903s to StartCluster
	I1010 18:19:57.971338  297519 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:57.971441  297519 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:19:57.975955  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:57.976886  297519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:19:57.977140  297519 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:19:57.977243  297519 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:19:57.977828  297519 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-472518"
	I1010 18:19:57.977853  297519 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-472518"
	I1010 18:19:57.977880  297519 host.go:66] Checking if "embed-certs-472518" exists ...
	I1010 18:19:57.977571  297519 config.go:182] Loaded profile config "embed-certs-472518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:19:57.978298  297519 addons.go:69] Setting default-storageclass=true in profile "embed-certs-472518"
	I1010 18:19:57.978314  297519 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-472518"
	I1010 18:19:57.978667  297519 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:19:57.979381  297519 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:19:57.980035  297519 out.go:179] * Verifying Kubernetes components...
	I1010 18:19:57.981300  297519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:19:58.019774  297519 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:19:58.021708  297519 addons.go:238] Setting addon default-storageclass=true in "embed-certs-472518"
	I1010 18:19:58.021757  297519 host.go:66] Checking if "embed-certs-472518" exists ...
	I1010 18:19:58.022356  297519 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:19:58.025890  297519 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:19:58.025919  297519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:19:58.025975  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:58.055619  297519 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:19:58.055648  297519 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:19:58.055731  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:58.065831  297519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:19:58.090621  297519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:19:58.199090  297519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1010 18:19:58.236956  297519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:19:58.376816  297519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:19:58.414355  297519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:19:58.569341  297519 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1010 18:19:58.570333  297519 node_ready.go:35] waiting up to 6m0s for node "embed-certs-472518" to be "Ready" ...
	I1010 18:19:58.861536  297519 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1010 18:19:58.862652  297519 addons.go:514] duration metric: took 885.405072ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1010 18:19:59.077258  297519 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-472518" context rescaled to 1 replicas
	I1010 18:19:56.690085  290755 addons.go:514] duration metric: took 610.6488ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1010 18:19:56.865441  290755 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-556024" context rescaled to 1 replicas
	W1010 18:19:58.375084  290755 node_ready.go:57] node "no-preload-556024" has "Ready":"False" status (will retry)
	W1010 18:20:00.867226  290755 node_ready.go:57] node "no-preload-556024" has "Ready":"False" status (will retry)
	W1010 18:20:00.573915  297519 node_ready.go:57] node "embed-certs-472518" has "Ready":"False" status (will retry)
	W1010 18:20:03.074483  297519 node_ready.go:57] node "embed-certs-472518" has "Ready":"False" status (will retry)
	W1010 18:20:02.867471  290755 node_ready.go:57] node "no-preload-556024" has "Ready":"False" status (will retry)
	W1010 18:20:04.868110  290755 node_ready.go:57] node "no-preload-556024" has "Ready":"False" status (will retry)
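Each node_ready.go retry above re-reads the node's Ready condition until it flips to True or the 6m0s budget logged at 18:19:56/18:19:58 runs out. An approximate manual check, or a blocking wait, against either profile:

	# read the condition once
	kubectl --context no-preload-556024 get node no-preload-556024 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# or block the way minikube effectively does
	kubectl --context no-preload-556024 wait node no-preload-556024 \
	  --for condition=Ready --timeout=6m0s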
	
	
	==> CRI-O <==
	Oct 10 18:19:53 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:53.044194244Z" level=info msg="Starting container: b37ebd16f6a09ae6c87ce41ca1724dc67108ff8c87457840f6703e05e5651e10" id=e2cf9cd1-a0fd-4ea5-96a0-7dd439c3439b name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:19:53 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:53.046229386Z" level=info msg="Started container" PID=2119 containerID=b37ebd16f6a09ae6c87ce41ca1724dc67108ff8c87457840f6703e05e5651e10 description=kube-system/coredns-5dd5756b68-qfwck/coredns id=e2cf9cd1-a0fd-4ea5-96a0-7dd439c3439b name=/runtime.v1.RuntimeService/StartContainer sandboxID=0c4d8496b0f00ecb969194f4f2bd380df25aa9c290fcf390d3d4f61003c48a55
	Oct 10 18:19:55 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:55.602202026Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b19ba26e-d7fc-454d-9e00-495136fe504e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:19:55 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:55.602384813Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:19:55 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:55.607780838Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c855cb1c2a17e67c1d095cec779f85fd502d82ae5d87181a39f1af61d268c106 UID:a052a617-10eb-4b35-8da3-41ed530a6878 NetNS:/var/run/netns/e08d71e4-24f6-42f4-b0c3-d35a433364a5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008703e8}] Aliases:map[]}"
	Oct 10 18:19:55 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:55.607826771Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 10 18:19:55 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:55.617692163Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c855cb1c2a17e67c1d095cec779f85fd502d82ae5d87181a39f1af61d268c106 UID:a052a617-10eb-4b35-8da3-41ed530a6878 NetNS:/var/run/netns/e08d71e4-24f6-42f4-b0c3-d35a433364a5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008703e8}] Aliases:map[]}"
	Oct 10 18:19:55 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:55.617834035Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 10 18:19:55 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:55.618647336Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 10 18:19:55 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:55.619695883Z" level=info msg="Ran pod sandbox c855cb1c2a17e67c1d095cec779f85fd502d82ae5d87181a39f1af61d268c106 with infra container: default/busybox/POD" id=b19ba26e-d7fc-454d-9e00-495136fe504e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:19:55 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:55.62103356Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3ba729ab-22bb-490a-a84d-f2cbd625472f name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:19:55 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:55.621268172Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3ba729ab-22bb-490a-a84d-f2cbd625472f name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:19:55 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:55.621319242Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3ba729ab-22bb-490a-a84d-f2cbd625472f name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:19:55 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:55.621891144Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7e1a9d6e-4f52-4b2d-bf63-fafb15f5a0fe name=/runtime.v1.ImageService/PullImage
	Oct 10 18:19:55 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:55.626169958Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 10 18:19:58 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:58.014222395Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=7e1a9d6e-4f52-4b2d-bf63-fafb15f5a0fe name=/runtime.v1.ImageService/PullImage
	Oct 10 18:19:58 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:58.016370251Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4383b4d5-ffa4-42ec-9e6d-4070038e8907 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:19:58 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:58.018961159Z" level=info msg="Creating container: default/busybox/busybox" id=d5a64ef2-c456-470d-aa52-7f5aba0ab5ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:19:58 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:58.020360825Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:19:58 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:58.029768915Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:19:58 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:58.030977405Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:19:58 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:58.101825409Z" level=info msg="Created container 14773565935bf7e63e082b1688c5bb154bec76db6cc10139f9d9d4b558dfdd6f: default/busybox/busybox" id=d5a64ef2-c456-470d-aa52-7f5aba0ab5ed name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:19:58 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:58.102831196Z" level=info msg="Starting container: 14773565935bf7e63e082b1688c5bb154bec76db6cc10139f9d9d4b558dfdd6f" id=3c81bb7a-2c17-442a-b84a-c3f8eb344c53 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:19:58 old-k8s-version-141193 crio[779]: time="2025-10-10T18:19:58.105315126Z" level=info msg="Started container" PID=2191 containerID=14773565935bf7e63e082b1688c5bb154bec76db6cc10139f9d9d4b558dfdd6f description=default/busybox/busybox id=3c81bb7a-2c17-442a-b84a-c3f8eb344c53 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c855cb1c2a17e67c1d095cec779f85fd502d82ae5d87181a39f1af61d268c106
	Oct 10 18:20:05 old-k8s-version-141193 crio[779]: time="2025-10-10T18:20:05.385566278Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	14773565935bf       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   c855cb1c2a17e       busybox                                          default
	b37ebd16f6a09       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   0c4d8496b0f00       coredns-5dd5756b68-qfwck                         kube-system
	a1754a49e00e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   046a66c12806e       storage-provisioner                              kube-system
	d7524ae838f7f       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   31cff41f2ea8d       kindnet-wjlh2                                    kube-system
	628a55cc8eb75       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   f86ec6f50df46       kube-proxy-n9klp                                 kube-system
	eaa08a2c3bec6       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      45 seconds ago      Running             kube-scheduler            0                   d7c07362f52c1       kube-scheduler-old-k8s-version-141193            kube-system
	8cdd7e904d748       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      45 seconds ago      Running             kube-apiserver            0                   7659660c93cfe       kube-apiserver-old-k8s-version-141193            kube-system
	d908d2900e007       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   abd6cec271bdc       etcd-old-k8s-version-141193                      kube-system
	6715172df948b       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      45 seconds ago      Running             kube-controller-manager   0                   c4248d51e7064       kube-controller-manager-old-k8s-version-141193   kube-system
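The table above is the runtime's own container listing; with the profile still running, roughly the same view comes from crictl over minikube's ssh tunnel:

	minikube -p old-k8s-version-141193 ssh -- sudo crictl ps -a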
	
	
	==> coredns [b37ebd16f6a09ae6c87ce41ca1724dc67108ff8c87457840f6703e05e5651e10] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34481 - 5623 "HINFO IN 5428006412600672249.1666413526635748735. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.035406213s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-141193
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-141193
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=old-k8s-version-141193
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_19_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:19:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-141193
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:19:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:19:56 +0000   Fri, 10 Oct 2025 18:19:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:19:56 +0000   Fri, 10 Oct 2025 18:19:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:19:56 +0000   Fri, 10 Oct 2025 18:19:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 18:19:56 +0000   Fri, 10 Oct 2025 18:19:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-141193
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                8f8bdf4a-f8cb-42ff-aa21-c2ad268c8723
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-qfwck                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-141193                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-wjlh2                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-141193             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-141193    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-n9klp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-141193             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-141193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-141193 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-141193 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node old-k8s-version-141193 event: Registered Node old-k8s-version-141193 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-141193 status is now: NodeReady
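The section above is ordinary kubectl describe output for the node and can be regenerated with:

	kubectl --context old-k8s-version-141193 describe node old-k8s-version-141193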
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[Oct10 17:34] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: 6a d3 27 bb ba 70 82 f0 d1 5c 58 83 08 00
	[Oct10 18:18] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 95 0c 3e 92 2e 08 06
	[  +0.052845] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[ +11.354316] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[  +7.101927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 9b 73 27 8c 80 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[  +6.287191] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 27 2d 28 d6 46 08 06
	[  +0.000293] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[Oct10 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 8c 22 f6 6b cf 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 29 bf 13 20 f9 08 06
	[ +15.511156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e d6 74 aa 27 d0 08 06
	[  +0.008495] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	
	
	==> etcd [d908d2900e007f6fe84fd865687e87ba34aa2b0a2e102c67ce4dca9d1afa677f] <==
	{"level":"info","ts":"2025-10-10T18:19:21.228992Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-10T18:19:21.229015Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-10T18:19:21.229006Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-10T18:19:21.229119Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-141193 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-10T18:19:21.231116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-10T18:19:21.23145Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-10T18:19:21.232273Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-10T18:19:21.233364Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-10T18:19:33.679431Z","caller":"traceutil/trace.go:171","msg":"trace[360058654] transaction","detail":"{read_only:false; response_revision:271; number_of_response:1; }","duration":"102.47226ms","start":"2025-10-10T18:19:33.576938Z","end":"2025-10-10T18:19:33.67941Z","steps":["trace[360058654] 'process raft request'  (duration: 102.351489ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:19:34.622475Z","caller":"traceutil/trace.go:171","msg":"trace[1516780223] linearizableReadLoop","detail":"{readStateIndex:283; appliedIndex:282; }","duration":"227.724246ms","start":"2025-10-10T18:19:34.394731Z","end":"2025-10-10T18:19:34.622455Z","steps":["trace[1516780223] 'read index received'  (duration: 227.549304ms)","trace[1516780223] 'applied index is now lower than readState.Index'  (duration: 174.31µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-10T18:19:34.62259Z","caller":"traceutil/trace.go:171","msg":"trace[402973878] transaction","detail":"{read_only:false; response_revision:272; number_of_response:1; }","duration":"244.025673ms","start":"2025-10-10T18:19:34.37854Z","end":"2025-10-10T18:19:34.622566Z","steps":["trace[402973878] 'process raft request'  (duration: 243.808132ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T18:19:34.622647Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"227.881101ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-10T18:19:34.622689Z","caller":"traceutil/trace.go:171","msg":"trace[1827314089] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:272; }","duration":"227.975172ms","start":"2025-10-10T18:19:34.394702Z","end":"2025-10-10T18:19:34.622677Z","steps":["trace[1827314089] 'agreement among raft nodes before linearized reading'  (duration: 227.837354ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T18:19:34.929352Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.821312ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.85.2\" ","response":"range_response_count:1 size:131"}
	{"level":"warn","ts":"2025-10-10T18:19:34.929368Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.449284ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-10T18:19:34.929413Z","caller":"traceutil/trace.go:171","msg":"trace[158285019] range","detail":"{range_begin:/registry/masterleases/192.168.85.2; range_end:; response_count:1; response_revision:272; }","duration":"117.900336ms","start":"2025-10-10T18:19:34.811501Z","end":"2025-10-10T18:19:34.929401Z","steps":["trace[158285019] 'range keys from in-memory index tree'  (duration: 117.69474ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:19:34.92942Z","caller":"traceutil/trace.go:171","msg":"trace[2067180050] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:272; }","duration":"117.533603ms","start":"2025-10-10T18:19:34.811875Z","end":"2025-10-10T18:19:34.929409Z","steps":["trace[2067180050] 'range keys from in-memory index tree'  (duration: 117.373762ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:19:35.057694Z","caller":"traceutil/trace.go:171","msg":"trace[400863021] linearizableReadLoop","detail":"{readStateIndex:284; appliedIndex:283; }","duration":"125.876767ms","start":"2025-10-10T18:19:34.931796Z","end":"2025-10-10T18:19:35.057673Z","steps":["trace[400863021] 'read index received'  (duration: 125.67532ms)","trace[400863021] 'applied index is now lower than readState.Index'  (duration: 199.978µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-10T18:19:35.057815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.020813ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-old-k8s-version-141193\" ","response":"range_response_count:1 size:5405"}
	{"level":"info","ts":"2025-10-10T18:19:35.057839Z","caller":"traceutil/trace.go:171","msg":"trace[1165210155] range","detail":"{range_begin:/registry/pods/kube-system/etcd-old-k8s-version-141193; range_end:; response_count:1; response_revision:272; }","duration":"126.063944ms","start":"2025-10-10T18:19:34.931767Z","end":"2025-10-10T18:19:35.057831Z","steps":["trace[1165210155] 'agreement among raft nodes before linearized reading'  (duration: 125.981981ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:19:35.183469Z","caller":"traceutil/trace.go:171","msg":"trace[2053303310] transaction","detail":"{read_only:false; response_revision:273; number_of_response:1; }","duration":"124.950055ms","start":"2025-10-10T18:19:35.058503Z","end":"2025-10-10T18:19:35.183453Z","steps":["trace[2053303310] 'process raft request'  (duration: 123.657174ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:19:35.274652Z","caller":"traceutil/trace.go:171","msg":"trace[591639903] linearizableReadLoop","detail":"{readStateIndex:286; appliedIndex:284; }","duration":"120.975121ms","start":"2025-10-10T18:19:35.15365Z","end":"2025-10-10T18:19:35.274625Z","steps":["trace[591639903] 'read index received'  (duration: 28.520219ms)","trace[591639903] 'applied index is now lower than readState.Index'  (duration: 92.454355ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-10T18:19:35.274759Z","caller":"traceutil/trace.go:171","msg":"trace[864010934] transaction","detail":"{read_only:false; response_revision:274; number_of_response:1; }","duration":"210.053434ms","start":"2025-10-10T18:19:35.064672Z","end":"2025-10-10T18:19:35.274726Z","steps":["trace[864010934] 'process raft request'  (duration: 209.818396ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-10T18:19:35.274896Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.246482ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-10T18:19:35.274999Z","caller":"traceutil/trace.go:171","msg":"trace[1911669053] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:274; }","duration":"121.366742ms","start":"2025-10-10T18:19:35.153619Z","end":"2025-10-10T18:19:35.274986Z","steps":["trace[1911669053] 'agreement among raft nodes before linearized reading'  (duration: 121.119585ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:20:06 up  1:02,  0 user,  load average: 4.84, 4.18, 2.64
	Linux old-k8s-version-141193 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d7524ae838f7fe0ea7e2fc660c016b0afe243d1df69d7a4c833753db21fdef9b] <==
	I1010 18:19:42.143685       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:19:42.144016       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1010 18:19:42.144199       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:19:42.144221       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:19:42.144249       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:19:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:19:42.346983       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:19:42.441010       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:19:42.441097       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:19:42.441327       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 18:19:42.841301       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:19:42.841338       1 metrics.go:72] Registering metrics
	I1010 18:19:42.841423       1 controller.go:711] "Syncing nftables rules"
	I1010 18:19:52.355170       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1010 18:19:52.355243       1 main.go:301] handling current node
	I1010 18:20:02.349134       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1010 18:20:02.349175       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8cdd7e904d74810089dce17ee1c628d02cfba350d8e88fee4361e91fa313a884] <==
	I1010 18:19:23.200047       1 autoregister_controller.go:141] Starting autoregister controller
	I1010 18:19:23.200466       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1010 18:19:23.200482       1 cache.go:39] Caches are synced for autoregister controller
	I1010 18:19:23.200287       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1010 18:19:23.200557       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1010 18:19:23.202711       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1010 18:19:23.204259       1 shared_informer.go:318] Caches are synced for configmaps
	I1010 18:19:23.207257       1 controller.go:624] quota admission added evaluator for: namespaces
	I1010 18:19:23.222711       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:19:23.265794       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1010 18:19:24.106391       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1010 18:19:24.114677       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1010 18:19:24.114900       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:19:24.659319       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:19:24.709353       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:19:24.818975       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1010 18:19:24.827773       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1010 18:19:24.829245       1 controller.go:624] quota admission added evaluator for: endpoints
	I1010 18:19:24.834350       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1010 18:19:25.145165       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1010 18:19:26.262732       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1010 18:19:26.282682       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1010 18:19:26.294549       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1010 18:19:38.752874       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1010 18:19:38.801765       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [6715172df948b0e42b142119b5eb4331c95cc7604dcee1b531146ba498f7b0dc] <==
	I1010 18:19:38.144433       1 shared_informer.go:318] Caches are synced for HPA
	I1010 18:19:38.149280       1 shared_informer.go:318] Caches are synced for service account
	I1010 18:19:38.205687       1 shared_informer.go:318] Caches are synced for resource quota
	I1010 18:19:38.525597       1 shared_informer.go:318] Caches are synced for garbage collector
	I1010 18:19:38.525635       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1010 18:19:38.528734       1 shared_informer.go:318] Caches are synced for garbage collector
	I1010 18:19:38.756487       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1010 18:19:38.809911       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-n9klp"
	I1010 18:19:38.812167       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-wjlh2"
	I1010 18:19:39.012316       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-qfwck"
	I1010 18:19:39.021574       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xjjcd"
	I1010 18:19:39.046385       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="290.056026ms"
	I1010 18:19:39.062418       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.946115ms"
	I1010 18:19:39.062533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.567µs"
	I1010 18:19:39.062685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.983µs"
	I1010 18:19:39.363503       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1010 18:19:39.377338       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-xjjcd"
	I1010 18:19:39.397348       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.144365ms"
	I1010 18:19:39.407634       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.219178ms"
	I1010 18:19:39.407750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="70.466µs"
	I1010 18:19:52.676930       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.694µs"
	I1010 18:19:52.688223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.536µs"
	I1010 18:19:52.976438       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1010 18:19:53.453013       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.23104ms"
	I1010 18:19:53.453158       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.034µs"
	
	
	==> kube-proxy [628a55cc8eb75bd5a89084047609458e33a60400b8b2a11006aa750adb5488b2] <==
	I1010 18:19:39.239788       1 server_others.go:69] "Using iptables proxy"
	I1010 18:19:39.254466       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1010 18:19:39.287604       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:19:39.292396       1 server_others.go:152] "Using iptables Proxier"
	I1010 18:19:39.292594       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1010 18:19:39.292649       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1010 18:19:39.292721       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1010 18:19:39.293046       1 server.go:846] "Version info" version="v1.28.0"
	I1010 18:19:39.293534       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:19:39.294618       1 config.go:188] "Starting service config controller"
	I1010 18:19:39.294709       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1010 18:19:39.295115       1 config.go:315] "Starting node config controller"
	I1010 18:19:39.295127       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1010 18:19:39.301004       1 config.go:97] "Starting endpoint slice config controller"
	I1010 18:19:39.301285       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1010 18:19:39.394869       1 shared_informer.go:318] Caches are synced for service config
	I1010 18:19:39.402004       1 shared_informer.go:318] Caches are synced for node config
	I1010 18:19:39.402065       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [eaa08a2c3bec6c9c14373b23e8339e68bce4fffd4e8d9e1b3710157ed2748b23] <==
	W1010 18:19:23.197655       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1010 18:19:23.198198       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1010 18:19:23.197031       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1010 18:19:23.198214       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1010 18:19:24.011976       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1010 18:19:24.012182       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1010 18:19:24.029115       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1010 18:19:24.029163       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1010 18:19:24.045304       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1010 18:19:24.045342       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1010 18:19:24.057690       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1010 18:19:24.057748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1010 18:19:24.080916       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1010 18:19:24.080956       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1010 18:19:24.103576       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1010 18:19:24.104119       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1010 18:19:24.160515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1010 18:19:24.161078       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1010 18:19:24.331811       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1010 18:19:24.331976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1010 18:19:24.374420       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1010 18:19:24.374462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1010 18:19:24.489983       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1010 18:19:24.490022       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1010 18:19:27.189214       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 10 18:19:38 old-k8s-version-141193 kubelet[1383]: I1010 18:19:38.107181    1383 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 10 18:19:38 old-k8s-version-141193 kubelet[1383]: I1010 18:19:38.108072    1383 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 10 18:19:38 old-k8s-version-141193 kubelet[1383]: I1010 18:19:38.814975    1383 topology_manager.go:215] "Topology Admit Handler" podUID="7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1" podNamespace="kube-system" podName="kube-proxy-n9klp"
	Oct 10 18:19:38 old-k8s-version-141193 kubelet[1383]: I1010 18:19:38.816509    1383 topology_manager.go:215] "Topology Admit Handler" podUID="388273e8-4ad1-4584-b43c-c20634781b0a" podNamespace="kube-system" podName="kindnet-wjlh2"
	Oct 10 18:19:38 old-k8s-version-141193 kubelet[1383]: I1010 18:19:38.912682    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1-kube-proxy\") pod \"kube-proxy-n9klp\" (UID: \"7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1\") " pod="kube-system/kube-proxy-n9klp"
	Oct 10 18:19:38 old-k8s-version-141193 kubelet[1383]: I1010 18:19:38.912800    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffxkm\" (UniqueName: \"kubernetes.io/projected/7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1-kube-api-access-ffxkm\") pod \"kube-proxy-n9klp\" (UID: \"7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1\") " pod="kube-system/kube-proxy-n9klp"
	Oct 10 18:19:38 old-k8s-version-141193 kubelet[1383]: I1010 18:19:38.912844    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1-lib-modules\") pod \"kube-proxy-n9klp\" (UID: \"7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1\") " pod="kube-system/kube-proxy-n9klp"
	Oct 10 18:19:38 old-k8s-version-141193 kubelet[1383]: I1010 18:19:38.912875    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/388273e8-4ad1-4584-b43c-c20634781b0a-lib-modules\") pod \"kindnet-wjlh2\" (UID: \"388273e8-4ad1-4584-b43c-c20634781b0a\") " pod="kube-system/kindnet-wjlh2"
	Oct 10 18:19:38 old-k8s-version-141193 kubelet[1383]: I1010 18:19:38.912907    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6n98\" (UniqueName: \"kubernetes.io/projected/388273e8-4ad1-4584-b43c-c20634781b0a-kube-api-access-v6n98\") pod \"kindnet-wjlh2\" (UID: \"388273e8-4ad1-4584-b43c-c20634781b0a\") " pod="kube-system/kindnet-wjlh2"
	Oct 10 18:19:38 old-k8s-version-141193 kubelet[1383]: I1010 18:19:38.912999    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/388273e8-4ad1-4584-b43c-c20634781b0a-xtables-lock\") pod \"kindnet-wjlh2\" (UID: \"388273e8-4ad1-4584-b43c-c20634781b0a\") " pod="kube-system/kindnet-wjlh2"
	Oct 10 18:19:38 old-k8s-version-141193 kubelet[1383]: I1010 18:19:38.913045    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1-xtables-lock\") pod \"kube-proxy-n9klp\" (UID: \"7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1\") " pod="kube-system/kube-proxy-n9klp"
	Oct 10 18:19:38 old-k8s-version-141193 kubelet[1383]: I1010 18:19:38.913089    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/388273e8-4ad1-4584-b43c-c20634781b0a-cni-cfg\") pod \"kindnet-wjlh2\" (UID: \"388273e8-4ad1-4584-b43c-c20634781b0a\") " pod="kube-system/kindnet-wjlh2"
	Oct 10 18:19:39 old-k8s-version-141193 kubelet[1383]: I1010 18:19:39.407874    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-n9klp" podStartSLOduration=1.40781684 podCreationTimestamp="2025-10-10 18:19:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:19:39.407409195 +0000 UTC m=+13.191799024" watchObservedRunningTime="2025-10-10 18:19:39.40781684 +0000 UTC m=+13.192206673"
	Oct 10 18:19:42 old-k8s-version-141193 kubelet[1383]: I1010 18:19:42.413186    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-wjlh2" podStartSLOduration=1.619414671 podCreationTimestamp="2025-10-10 18:19:38 +0000 UTC" firstStartedPulling="2025-10-10 18:19:39.131240469 +0000 UTC m=+12.915630291" lastFinishedPulling="2025-10-10 18:19:41.924956899 +0000 UTC m=+15.709346718" observedRunningTime="2025-10-10 18:19:42.412820915 +0000 UTC m=+16.197210744" watchObservedRunningTime="2025-10-10 18:19:42.413131098 +0000 UTC m=+16.197520927"
	Oct 10 18:19:52 old-k8s-version-141193 kubelet[1383]: I1010 18:19:52.653541    1383 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 10 18:19:52 old-k8s-version-141193 kubelet[1383]: I1010 18:19:52.677221    1383 topology_manager.go:215] "Topology Admit Handler" podUID="d60fe80c-7b6d-46ae-bf0d-1bc8c178ebf3" podNamespace="kube-system" podName="coredns-5dd5756b68-qfwck"
	Oct 10 18:19:52 old-k8s-version-141193 kubelet[1383]: I1010 18:19:52.678356    1383 topology_manager.go:215] "Topology Admit Handler" podUID="ab2fa802-aedc-4f1c-ac3d-56e90d21c38b" podNamespace="kube-system" podName="storage-provisioner"
	Oct 10 18:19:52 old-k8s-version-141193 kubelet[1383]: I1010 18:19:52.713685    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbbkw\" (UniqueName: \"kubernetes.io/projected/d60fe80c-7b6d-46ae-bf0d-1bc8c178ebf3-kube-api-access-mbbkw\") pod \"coredns-5dd5756b68-qfwck\" (UID: \"d60fe80c-7b6d-46ae-bf0d-1bc8c178ebf3\") " pod="kube-system/coredns-5dd5756b68-qfwck"
	Oct 10 18:19:52 old-k8s-version-141193 kubelet[1383]: I1010 18:19:52.713766    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ab2fa802-aedc-4f1c-ac3d-56e90d21c38b-tmp\") pod \"storage-provisioner\" (UID: \"ab2fa802-aedc-4f1c-ac3d-56e90d21c38b\") " pod="kube-system/storage-provisioner"
	Oct 10 18:19:52 old-k8s-version-141193 kubelet[1383]: I1010 18:19:52.713867    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlj7z\" (UniqueName: \"kubernetes.io/projected/ab2fa802-aedc-4f1c-ac3d-56e90d21c38b-kube-api-access-vlj7z\") pod \"storage-provisioner\" (UID: \"ab2fa802-aedc-4f1c-ac3d-56e90d21c38b\") " pod="kube-system/storage-provisioner"
	Oct 10 18:19:52 old-k8s-version-141193 kubelet[1383]: I1010 18:19:52.713904    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d60fe80c-7b6d-46ae-bf0d-1bc8c178ebf3-config-volume\") pod \"coredns-5dd5756b68-qfwck\" (UID: \"d60fe80c-7b6d-46ae-bf0d-1bc8c178ebf3\") " pod="kube-system/coredns-5dd5756b68-qfwck"
	Oct 10 18:19:53 old-k8s-version-141193 kubelet[1383]: I1010 18:19:53.433670    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.433617752 podCreationTimestamp="2025-10-10 18:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:19:53.433390261 +0000 UTC m=+27.217780088" watchObservedRunningTime="2025-10-10 18:19:53.433617752 +0000 UTC m=+27.218007605"
	Oct 10 18:19:53 old-k8s-version-141193 kubelet[1383]: I1010 18:19:53.443742    1383 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-qfwck" podStartSLOduration=14.443672108 podCreationTimestamp="2025-10-10 18:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:19:53.443345783 +0000 UTC m=+27.227735613" watchObservedRunningTime="2025-10-10 18:19:53.443672108 +0000 UTC m=+27.228061937"
	Oct 10 18:19:55 old-k8s-version-141193 kubelet[1383]: I1010 18:19:55.299905    1383 topology_manager.go:215] "Topology Admit Handler" podUID="a052a617-10eb-4b35-8da3-41ed530a6878" podNamespace="default" podName="busybox"
	Oct 10 18:19:55 old-k8s-version-141193 kubelet[1383]: I1010 18:19:55.328703    1383 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsbdm\" (UniqueName: \"kubernetes.io/projected/a052a617-10eb-4b35-8da3-41ed530a6878-kube-api-access-dsbdm\") pod \"busybox\" (UID: \"a052a617-10eb-4b35-8da3-41ed530a6878\") " pod="default/busybox"
	
	
	==> storage-provisioner [a1754a49e00e8c549151315d12a5b68ce780512c82f983cab7a018b1bb07f40a] <==
	I1010 18:19:53.053042       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 18:19:53.068326       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 18:19:53.068407       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1010 18:19:53.078617       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 18:19:53.078852       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-141193_34795716-9c39-42c9-82d7-caac7d51b37a!
	I1010 18:19:53.078961       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"783e5569-4ec9-4de4-9b38-064b377c9a54", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-141193_34795716-9c39-42c9-82d7-caac7d51b37a became leader
	I1010 18:19:53.179488       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-141193_34795716-9c39-42c9-82d7-caac7d51b37a!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-141193 -n old-k8s-version-141193
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-141193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-472518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-472518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (251.743903ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:20:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-472518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
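For context, the failing step is the runc-based paused-state probe quoted in the stderr above ("check paused: list paused: runc: sudo runc list -f json"), and the docker inspect output below shows /run mounted as a tmpfs inside the node container, so /run/runc only exists once runc has written state there. A minimal reproduction sketch, assuming the embed-certs-472518 profile is still running; the runc command is copied verbatim from the error:

	# Re-run the same probe inside the node container:
	out/minikube-linux-amd64 -p embed-certs-472518 ssh -- sudo runc list -f json
	# On this crio node it exits 1 with:
	#   level=error msg="open /run/runc: no such file or directory"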
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-472518 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-472518 describe deploy/metrics-server -n kube-system: exit status 1 (65.068446ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-472518 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
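Because the describe call above already returned NotFound, the assertion has no deployment info to match against. For reference, a sketch of the image check this assertion roughly amounts to, using the same context, namespace, and deployment name the test targets (it only returns output once a metrics-server deployment exists, which is not the case in this run):

	kubectl --context embed-certs-472518 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# The test expects the result to contain fake.domain/registry.k8s.io/echoserver:1.4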
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-472518
helpers_test.go:243: (dbg) docker inspect embed-certs-472518:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e",
	        "Created": "2025-10-10T18:19:36.31646399Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 298446,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:19:36.422300097Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e/hosts",
	        "LogPath": "/var/lib/docker/containers/2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e/2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e-json.log",
	        "Name": "/embed-certs-472518",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-472518:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-472518",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e",
	                "LowerDir": "/var/lib/docker/overlay2/5fa96fd4ec73d503d3d3528d8c7b13f7ca1b0a64ecf18291fa642aa2e0a2033a-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5fa96fd4ec73d503d3d3528d8c7b13f7ca1b0a64ecf18291fa642aa2e0a2033a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5fa96fd4ec73d503d3d3528d8c7b13f7ca1b0a64ecf18291fa642aa2e0a2033a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5fa96fd4ec73d503d3d3528d8c7b13f7ca1b0a64ecf18291fa642aa2e0a2033a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-472518",
	                "Source": "/var/lib/docker/volumes/embed-certs-472518/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-472518",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-472518",
	                "name.minikube.sigs.k8s.io": "embed-certs-472518",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "29f7874edd098870f18c0adf7d7d3c2dbeac742fc1583dbb5947dca225d9793a",
	            "SandboxKey": "/var/run/docker/netns/29f7874edd09",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-472518": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:ab:2b:92:cc:45",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cbce2d732620a5010a9bb6fa38f48aa0b3fba945ed0c5927e2d54406158c8a77",
	                    "EndpointID": "ce1052f4eb42175a9ff70a93f6c4130091bbf1c799c8609b5fb8ac38e15c4d09",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-472518",
	                        "2e7bf16e9ebb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-472518 -n embed-certs-472518
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-472518 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-472518 logs -n 25: (1.148602014s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-078032 sudo ip a s                                                                                                             │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo ip r s                                                                                                             │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo iptables-save                                                                                                      │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo iptables -t nat -L -n -v                                                                                           │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl status kubelet --all --full --no-pager                                                                   │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl cat kubelet --no-pager                                                                                   │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo journalctl -xeu kubelet --all --full --no-pager                                                                    │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl status docker --all --full --no-pager                                                                    │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo systemctl cat docker --no-pager                                                                                    │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /etc/docker/daemon.json                                                                                        │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo docker system info                                                                                                 │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo systemctl status cri-docker --all --full --no-pager                                                                │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo systemctl cat cri-docker --no-pager                                                                                │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cri-dockerd --version                                                                                              │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl status containerd --all --full --no-pager                                                                │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo systemctl cat containerd --no-pager                                                                                │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /lib/systemd/system/containerd.service                                                                         │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /etc/containerd/config.toml                                                                                    │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-472518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain │ embed-certs-472518 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo containerd config dump                                                                                             │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl status crio --all --full --no-pager                                                                      │ bridge-078032      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:19:29
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:19:29.307380  297519 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:19:29.307639  297519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:19:29.307648  297519 out.go:374] Setting ErrFile to fd 2...
	I1010 18:19:29.307652  297519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:19:29.307848  297519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:19:29.308359  297519 out.go:368] Setting JSON to false
	I1010 18:19:29.309544  297519 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3709,"bootTime":1760116660,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:19:29.309628  297519 start.go:141] virtualization: kvm guest
	I1010 18:19:29.311539  297519 out.go:179] * [embed-certs-472518] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:19:29.312789  297519 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:19:29.312818  297519 notify.go:220] Checking for updates...
	I1010 18:19:29.315279  297519 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:19:29.316353  297519 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:19:29.317357  297519 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:19:29.318296  297519 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:19:29.319275  297519 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:19:29.320678  297519 config.go:182] Loaded profile config "bridge-078032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:19:29.320777  297519 config.go:182] Loaded profile config "no-preload-556024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:19:29.320865  297519 config.go:182] Loaded profile config "old-k8s-version-141193": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1010 18:19:29.320961  297519 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:19:29.347326  297519 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:19:29.347447  297519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:19:29.415299  297519 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-10-10 18:19:29.403926888 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:19:29.415450  297519 docker.go:318] overlay module found
	I1010 18:19:29.417099  297519 out.go:179] * Using the docker driver based on user configuration
	I1010 18:19:29.418152  297519 start.go:305] selected driver: docker
	I1010 18:19:29.418168  297519 start.go:925] validating driver "docker" against <nil>
	I1010 18:19:29.418182  297519 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:19:29.418778  297519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:19:29.480646  297519 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:92 SystemTime:2025-10-10 18:19:29.469973553 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:19:29.480836  297519 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1010 18:19:29.481123  297519 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:19:29.486487  297519 out.go:179] * Using Docker driver with root privileges
	I1010 18:19:29.487515  297519 cni.go:84] Creating CNI manager for ""
	I1010 18:19:29.487573  297519 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:19:29.487585  297519 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1010 18:19:29.487636  297519 start.go:349] cluster config:
	{Name:embed-certs-472518 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:19:29.488847  297519 out.go:179] * Starting "embed-certs-472518" primary control-plane node in "embed-certs-472518" cluster
	I1010 18:19:29.489954  297519 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:19:29.491026  297519 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:19:29.492039  297519 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:19:29.492088  297519 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:19:29.492107  297519 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:19:29.492115  297519 cache.go:58] Caching tarball of preloaded images
	I1010 18:19:29.492198  297519 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:19:29.492209  297519 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1010 18:19:29.492323  297519 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/config.json ...
	I1010 18:19:29.492345  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/config.json: {Name:mkf4940505c7ee133425c43eda360cf6e2c7ca37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:29.513009  297519 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:19:29.513028  297519 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:19:29.513047  297519 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:19:29.513084  297519 start.go:360] acquireMachinesLock for embed-certs-472518: {Name:mk9cc494f12a6273567ade3e880d684508b52f40 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:19:29.513193  297519 start.go:364] duration metric: took 89.205µs to acquireMachinesLock for "embed-certs-472518"
	I1010 18:19:29.513217  297519 start.go:93] Provisioning new machine with config: &{Name:embed-certs-472518 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:19:29.513310  297519 start.go:125] createHost starting for "" (driver="docker")
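
The image.go lines above skip both the registry pull and the kic load because the kicbase image is already in the local daemon. A minimal Go sketch of that kind of existence probe, assuming `docker image inspect` as the check (the excerpt does not show minikube's exact mechanism):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // imageInDaemon returns true when the reference is already present
    // locally: `docker image inspect` exits 0 only for known images, which
    // is what lets the pull (and the later load) be skipped.
    func imageInDaemon(ref string) bool {
        return exec.Command("docker", "image", "inspect", ref).Run() == nil
    }

    func main() {
        ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724"
        if imageInDaemon(ref) {
            fmt.Println("found in local docker daemon, skipping pull")
        } else {
            fmt.Println("not found, pulling", ref)
        }
    }
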
	I1010 18:19:26.202203  290755 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.408500221s)
	I1010 18:19:26.202229  290755 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1010 18:19:26.202248  290755 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1010 18:19:26.202265  290755 ssh_runner.go:235] Completed: which crictl: (1.408461538s)
	I1010 18:19:26.202302  290755 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1010 18:19:26.202319  290755 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:19:27.436400  290755 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.23407664s)
	I1010 18:19:27.436424  290755 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1010 18:19:27.436439  290755 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1010 18:19:27.436474  290755 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1010 18:19:27.436477  290755 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.234133965s)
	I1010 18:19:27.436545  290755 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:19:27.468040  290755 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:19:28.903594  290755 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.435515932s)
	I1010 18:19:28.903650  290755 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1010 18:19:28.903592  290755 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.467091411s)
	I1010 18:19:28.903722  290755 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1010 18:19:28.903743  290755 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1010 18:19:28.903758  290755 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1010 18:19:28.903790  290755 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1010 18:19:28.909081  290755 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1010 18:19:28.909119  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
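
Every image and binary transfer in this run follows the same stat-then-scp pattern visible above: probe the path on the node, and copy only when stat exits non-zero ("No such file or directory"). A self-contained sketch, shelling out to plain ssh/scp where minikube uses its internal ssh_runner (host and paths are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureRemoteFile probes the remote path first and transfers only on a
    // failed stat, mirroring the existence check in the log.
    func ensureRemoteFile(host, local, remote string) error {
        if exec.Command("ssh", host, "stat", "-c", "%s %y", remote).Run() == nil {
            return nil // already present on the node, skip the copy
        }
        return exec.Command("scp", local, host+":"+remote).Run()
    }

    func main() {
        err := ensureRemoteFile("docker@127.0.0.1",
            "storage-provisioner_v5",
            "/var/lib/minikube/images/storage-provisioner_v5")
        fmt.Println("transfer result:", err)
    }
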
	I1010 18:19:30.434480  290755 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.530663151s)
	I1010 18:19:30.434509  290755 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1010 18:19:30.434544  290755 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1010 18:19:30.434604  290755 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1010 18:19:26.432444  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1010 18:19:27.227700  284725 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 18:19:27.227790  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:27.227806  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-141193 minikube.k8s.io/updated_at=2025_10_10T18_19_27_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46 minikube.k8s.io/name=old-k8s-version-141193 minikube.k8s.io/primary=true
	I1010 18:19:27.241265  284725 ops.go:34] apiserver oom_adj: -16
	I1010 18:19:27.328182  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:27.829143  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:28.329274  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:28.829277  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:29.329094  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:29.828796  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:30.329234  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:30.829191  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:31.328940  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1010 18:19:29.390497  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	W1010 18:19:31.891201  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	I1010 18:19:29.515382  297519 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1010 18:19:29.515627  297519 start.go:159] libmachine.API.Create for "embed-certs-472518" (driver="docker")
	I1010 18:19:29.515666  297519 client.go:168] LocalClient.Create starting
	I1010 18:19:29.515737  297519 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem
	I1010 18:19:29.515777  297519 main.go:141] libmachine: Decoding PEM data...
	I1010 18:19:29.515803  297519 main.go:141] libmachine: Parsing certificate...
	I1010 18:19:29.515865  297519 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem
	I1010 18:19:29.515895  297519 main.go:141] libmachine: Decoding PEM data...
	I1010 18:19:29.515908  297519 main.go:141] libmachine: Parsing certificate...
	I1010 18:19:29.516365  297519 cli_runner.go:164] Run: docker network inspect embed-certs-472518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1010 18:19:29.534588  297519 cli_runner.go:211] docker network inspect embed-certs-472518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1010 18:19:29.534671  297519 network_create.go:284] running [docker network inspect embed-certs-472518] to gather additional debugging logs...
	I1010 18:19:29.534696  297519 cli_runner.go:164] Run: docker network inspect embed-certs-472518
	W1010 18:19:29.553547  297519 cli_runner.go:211] docker network inspect embed-certs-472518 returned with exit code 1
	I1010 18:19:29.553594  297519 network_create.go:287] error running [docker network inspect embed-certs-472518]: docker network inspect embed-certs-472518: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-472518 not found
	I1010 18:19:29.553614  297519 network_create.go:289] output of [docker network inspect embed-certs-472518]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-472518 not found
	
	** /stderr **
	I1010 18:19:29.553772  297519 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:19:29.572947  297519 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f8fb0c8a54c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:51:a2:ab:ca:d6} reservation:<nil>}
	I1010 18:19:29.573938  297519 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bdbbffbd65c1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:11:33:77:48:20} reservation:<nil>}
	I1010 18:19:29.575016  297519 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b6a5dab2001 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:93:a5:d3:c3:8f} reservation:<nil>}
	I1010 18:19:29.575896  297519 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-62177a68d9eb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:70:f2:a2:da:00} reservation:<nil>}
	I1010 18:19:29.576771  297519 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-7dff4078001c IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:82:9e:3a:78:07:0b} reservation:<nil>}
	I1010 18:19:29.577819  297519 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fe1c30}
	I1010 18:19:29.577853  297519 network_create.go:124] attempt to create docker network embed-certs-472518 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1010 18:19:29.577909  297519 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-472518 embed-certs-472518
	I1010 18:19:29.638760  297519 network_create.go:108] docker network embed-certs-472518 192.168.94.0/24 created
	I1010 18:19:29.638795  297519 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-472518" container
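
The network.go scan above walks candidate private /24s and settles on the first one no local bridge occupies, then creates the docker network with a matching gateway. A Go sketch of that scan; the step of 9 between candidates is inferred from the 49 / 58 / 67 / 76 / 85 / 94 sequence, and the create command here elides the gateway/MTU/label flags shown in the log:

    package main

    import (
        "fmt"
        "net"
        "os/exec"
    )

    // firstFreeSubnet walks 192.168.x.0/24 candidates and returns the first
    // network address not already held by a local interface.
    func firstFreeSubnet() string {
        taken := map[string]bool{}
        ifaces, _ := net.Interfaces()
        for _, ifc := range ifaces {
            addrs, _ := ifc.Addrs()
            for _, a := range addrs {
                if ipn, ok := a.(*net.IPNet); ok {
                    taken[ipn.IP.Mask(ipn.Mask).String()] = true
                }
            }
        }
        for third := 49; third < 255; third += 9 {
            if base := fmt.Sprintf("192.168.%d.0", third); !taken[base] {
                return base + "/24"
            }
        }
        return ""
    }

    func main() {
        subnet := firstFreeSubnet()
        fmt.Println("using free private subnet", subnet)
        out, err := exec.Command("docker", "network", "create",
            "--driver=bridge", "--subnet="+subnet, "embed-certs-472518").CombinedOutput()
        fmt.Println(string(out), err)
    }
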
	I1010 18:19:29.638864  297519 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1010 18:19:29.657821  297519 cli_runner.go:164] Run: docker volume create embed-certs-472518 --label name.minikube.sigs.k8s.io=embed-certs-472518 --label created_by.minikube.sigs.k8s.io=true
	I1010 18:19:29.680548  297519 oci.go:103] Successfully created a docker volume embed-certs-472518
	I1010 18:19:29.680634  297519 cli_runner.go:164] Run: docker run --rm --name embed-certs-472518-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-472518 --entrypoint /usr/bin/test -v embed-certs-472518:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -d /var/lib
	I1010 18:19:30.653103  297519 oci.go:107] Successfully prepared a docker volume embed-certs-472518
	I1010 18:19:30.653168  297519 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:19:30.653194  297519 kic.go:194] Starting extracting preloaded images to volume ...
	I1010 18:19:30.653259  297519 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-472518:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1010 18:19:35.958583  290755 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.523955948s)
	I1010 18:19:35.958613  290755 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1010 18:19:35.958639  290755 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1010 18:19:35.958684  290755 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1010 18:19:31.829156  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:32.329031  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:32.829328  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:33.328835  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:33.829064  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:34.328594  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:34.828997  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:35.328857  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:35.829243  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:36.329252  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1010 18:19:33.922978  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	W1010 18:19:36.389402  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	I1010 18:19:36.829175  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:37.329152  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:37.828660  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:38.329035  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:38.829239  284725 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:38.910444  284725 kubeadm.go:1113] duration metric: took 11.682717077s to wait for elevateKubeSystemPrivileges
	I1010 18:19:38.910486  284725 kubeadm.go:402] duration metric: took 22.874508869s to StartCluster
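
The repeated `kubectl get sa default` runs above are a readiness poll: kubeadm's RBAC bootstrap is considered done once the default ServiceAccount exists, and the timestamps show roughly a 500ms retry cadence. A sketch of that wait loop:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds or
    // the deadline passes, matching the ~500ms cadence in the log.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig)
            if cmd.Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        err := waitForDefaultSA("/var/lib/minikube/binaries/v1.28.0/kubectl",
            "/var/lib/minikube/kubeconfig", 2*time.Minute)
        fmt.Println(err)
    }
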
	I1010 18:19:38.910508  284725 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:38.910586  284725 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:19:38.911936  284725 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:38.912266  284725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:19:38.912275  284725 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:19:38.912349  284725 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:19:38.912448  284725 config.go:182] Loaded profile config "old-k8s-version-141193": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1010 18:19:38.912517  284725 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-141193"
	I1010 18:19:38.912540  284725 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-141193"
	I1010 18:19:38.912563  284725 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-141193"
	I1010 18:19:38.912587  284725 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-141193"
	I1010 18:19:38.912618  284725 host.go:66] Checking if "old-k8s-version-141193" exists ...
	I1010 18:19:38.912962  284725 cli_runner.go:164] Run: docker container inspect old-k8s-version-141193 --format={{.State.Status}}
	I1010 18:19:38.913195  284725 cli_runner.go:164] Run: docker container inspect old-k8s-version-141193 --format={{.State.Status}}
	I1010 18:19:38.914508  284725 out.go:179] * Verifying Kubernetes components...
	I1010 18:19:38.915692  284725 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:19:38.940475  284725 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-141193"
	I1010 18:19:38.940524  284725 host.go:66] Checking if "old-k8s-version-141193" exists ...
	I1010 18:19:38.940990  284725 cli_runner.go:164] Run: docker container inspect old-k8s-version-141193 --format={{.State.Status}}
	I1010 18:19:38.941661  284725 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:19:36.235326  297519 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-472518:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir: (5.582014005s)
	I1010 18:19:36.235353  297519 kic.go:203] duration metric: took 5.582156324s to extract preloaded images to volume ...
	W1010 18:19:36.235438  297519 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1010 18:19:36.235466  297519 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1010 18:19:36.235508  297519 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1010 18:19:36.298744  297519 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-472518 --name embed-certs-472518 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-472518 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-472518 --network embed-certs-472518 --ip 192.168.94.2 --volume embed-certs-472518:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6
	I1010 18:19:36.660277  297519 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Running}}
	I1010 18:19:36.683109  297519 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:19:36.706289  297519 cli_runner.go:164] Run: docker exec embed-certs-472518 stat /var/lib/dpkg/alternatives/iptables
	I1010 18:19:36.758639  297519 oci.go:144] the created container "embed-certs-472518" has a running status.
	I1010 18:19:36.758670  297519 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa...
	I1010 18:19:36.927753  297519 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1010 18:19:36.958499  297519 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:19:36.989724  297519 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1010 18:19:36.989767  297519 kic_runner.go:114] Args: [docker exec --privileged embed-certs-472518 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1010 18:19:37.085145  297519 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:19:37.103765  297519 machine.go:93] provisionDockerMachine start ...
	I1010 18:19:37.103877  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:37.121859  297519 main.go:141] libmachine: Using SSH client type: native
	I1010 18:19:37.122156  297519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1010 18:19:37.122185  297519 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:19:37.278934  297519 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-472518
	
	I1010 18:19:37.278974  297519 ubuntu.go:182] provisioning hostname "embed-certs-472518"
	I1010 18:19:37.279036  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:37.301787  297519 main.go:141] libmachine: Using SSH client type: native
	I1010 18:19:37.302122  297519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1010 18:19:37.302147  297519 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-472518 && echo "embed-certs-472518" | sudo tee /etc/hostname
	I1010 18:19:37.462953  297519 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-472518
	
	I1010 18:19:37.463092  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:37.483355  297519 main.go:141] libmachine: Using SSH client type: native
	I1010 18:19:37.483562  297519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1010 18:19:37.483581  297519 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-472518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-472518/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-472518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:19:37.619633  297519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:19:37.619663  297519 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:19:37.619700  297519 ubuntu.go:190] setting up certificates
	I1010 18:19:37.619721  297519 provision.go:84] configureAuth start
	I1010 18:19:37.619782  297519 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-472518
	I1010 18:19:37.639524  297519 provision.go:143] copyHostCerts
	I1010 18:19:37.639581  297519 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:19:37.639590  297519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:19:37.639653  297519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:19:37.639753  297519 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:19:37.639762  297519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:19:37.639792  297519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:19:37.639892  297519 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:19:37.639904  297519 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:19:37.639944  297519 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:19:37.640194  297519 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.embed-certs-472518 san=[127.0.0.1 192.168.94.2 embed-certs-472518 localhost minikube]
	I1010 18:19:37.711804  297519 provision.go:177] copyRemoteCerts
	I1010 18:19:37.711857  297519 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:19:37.711895  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:37.732095  297519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:19:37.838992  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:19:37.866714  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 18:19:37.888945  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1010 18:19:37.918493  297519 provision.go:87] duration metric: took 298.757472ms to configureAuth
	I1010 18:19:37.918529  297519 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:19:37.918725  297519 config.go:182] Loaded profile config "embed-certs-472518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:19:37.918889  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:37.942356  297519 main.go:141] libmachine: Using SSH client type: native
	I1010 18:19:37.942602  297519 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1010 18:19:37.942622  297519 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:19:38.254626  297519 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:19:38.254651  297519 machine.go:96] duration metric: took 1.150858414s to provisionDockerMachine
	I1010 18:19:38.254663  297519 client.go:171] duration metric: took 8.738987356s to LocalClient.Create
	I1010 18:19:38.254682  297519 start.go:167] duration metric: took 8.739055799s to libmachine.API.Create "embed-certs-472518"
	I1010 18:19:38.254691  297519 start.go:293] postStartSetup for "embed-certs-472518" (driver="docker")
	I1010 18:19:38.254708  297519 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:19:38.254780  297519 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:19:38.254843  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:38.274793  297519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:19:38.380997  297519 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:19:38.385778  297519 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:19:38.385812  297519 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:19:38.385824  297519 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:19:38.385897  297519 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:19:38.386015  297519 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:19:38.386329  297519 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:19:38.399687  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:19:38.433205  297519 start.go:296] duration metric: took 178.496265ms for postStartSetup
	I1010 18:19:38.433649  297519 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-472518
	I1010 18:19:38.457310  297519 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/config.json ...
	I1010 18:19:38.457685  297519 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:19:38.457744  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:38.481945  297519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:19:38.584022  297519 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:19:38.589083  297519 start.go:128] duration metric: took 9.075756126s to createHost
	I1010 18:19:38.589110  297519 start.go:83] releasing machines lock for "embed-certs-472518", held for 9.075905248s
	I1010 18:19:38.589174  297519 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-472518
	I1010 18:19:38.608767  297519 ssh_runner.go:195] Run: cat /version.json
	I1010 18:19:38.608827  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:38.608846  297519 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:19:38.608919  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:38.632034  297519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:19:38.632792  297519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:19:38.800917  297519 ssh_runner.go:195] Run: systemctl --version
	I1010 18:19:38.808600  297519 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:19:38.855293  297519 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:19:38.861342  297519 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:19:38.861410  297519 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:19:38.891858  297519 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:19:38.891887  297519 start.go:495] detecting cgroup driver to use...
	I1010 18:19:38.891918  297519 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:19:38.891971  297519 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:19:38.913275  297519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:19:38.933211  297519 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:19:38.933272  297519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:19:38.961814  297519 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:19:38.989581  297519 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:19:39.113116  297519 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:19:39.242841  297519 docker.go:234] disabling docker service ...
	I1010 18:19:39.242909  297519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:19:39.271389  297519 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:19:39.292339  297519 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:19:38.942888  284725 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:19:38.942907  284725 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:19:38.942960  284725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-141193
	I1010 18:19:38.970721  284725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/old-k8s-version-141193/id_rsa Username:docker}
	I1010 18:19:38.975861  284725 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:19:38.975942  284725 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:19:38.976121  284725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-141193
	I1010 18:19:39.001036  284725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/old-k8s-version-141193/id_rsa Username:docker}
	I1010 18:19:39.029164  284725 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1010 18:19:39.079267  284725 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:19:39.182181  284725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:19:39.196892  284725 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:19:39.325325  284725 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-141193" to be "Ready" ...
	I1010 18:19:39.325433  284725 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1010 18:19:39.645778  284725 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
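
The configmap replace at 18:19:39.029 rewrites CoreDNS's Corefile in place: the sed expressions shown there add a `log` directive above `errors` and splice the following hosts stanza in just above the `forward . /etc/resolv.conf` plugin, which is how pods resolve host.minikube.internal to the host gateway:

            hosts {
               192.168.85.1 host.minikube.internal
               fallthrough
            }
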
	I1010 18:19:39.426959  297519 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:19:39.529895  297519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:19:39.544847  297519 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:19:39.562962  297519 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:19:39.563028  297519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:19:39.579129  297519 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:19:39.579188  297519 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:19:39.590948  297519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:19:39.603114  297519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:19:39.616530  297519 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:19:39.628641  297519 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:19:39.640699  297519 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:19:39.658670  297519 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:19:39.668732  297519 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:19:39.677245  297519 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:19:39.685122  297519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:19:39.768678  297519 ssh_runner.go:195] Run: sudo systemctl restart crio
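
Taken together, the sed edits between 18:19:39.56 and 18:19:39.66 converge on a CRI-O drop-in roughly like the sketch below before crio is restarted. The `[crio.image]`/`[crio.runtime]` table placement is an assumption about the stock 02-crio.conf layout; the values are exactly those written above, and /etc/crictl.yaml was pointed at unix:///var/run/crio/crio.sock just before:

    # /etc/crio/crio.conf.d/02-crio.conf (post-edit sketch)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
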
	I1010 18:19:40.416735  297519 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:19:40.416808  297519 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:19:40.420997  297519 start.go:563] Will wait 60s for crictl version
	I1010 18:19:40.421064  297519 ssh_runner.go:195] Run: which crictl
	I1010 18:19:40.424835  297519 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:19:40.451116  297519 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:19:40.451193  297519 ssh_runner.go:195] Run: crio --version
	I1010 18:19:40.478720  297519 ssh_runner.go:195] Run: crio --version
	I1010 18:19:40.508073  297519 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:19:36.573316  290755 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1010 18:19:36.573357  290755 cache_images.go:124] Successfully loaded all cached images
	I1010 18:19:36.573362  290755 cache_images.go:93] duration metric: took 14.187620191s to LoadCachedImages
	I1010 18:19:36.573372  290755 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1010 18:19:36.573462  290755 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-556024 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:19:36.573521  290755 ssh_runner.go:195] Run: crio config
	I1010 18:19:36.621515  290755 cni.go:84] Creating CNI manager for ""
	I1010 18:19:36.621547  290755 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:19:36.621568  290755 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 18:19:36.621599  290755 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-556024 NodeName:no-preload-556024 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:19:36.621768  290755 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-556024"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
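
The config above is rendered by minikube from Go templates, with the per-node values (node name, node IP, CRI socket) substituted in. A minimal sketch of that rendering technique using only text/template; the struct and template below are illustrative, not minikube's actual types:

    package main

    import (
    	"os"
    	"text/template"
    )

    // nodeValues holds the per-node fields substituted into the config.
    // The field names here are illustrative only.
    type nodeValues struct {
    	NodeName, NodeIP, CRISocket string
    }

    const kubeletConfigTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: {{.CRISocket}}
    hairpinMode: hairpin-veth
    # node-specific flags are passed via kubeletExtraArgs:
    #   --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(kubeletConfigTmpl))
    	vals := nodeValues{
    		NodeName:  "no-preload-556024",
    		NodeIP:    "192.168.76.2",
    		CRISocket: "unix:///var/run/crio/crio.sock",
    	}
    	if err := t.Execute(os.Stdout, vals); err != nil {
    		panic(err)
    	}
    }
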
	I1010 18:19:36.621843  290755 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:19:36.631706  290755 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1010 18:19:36.631757  290755 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1010 18:19:36.641973  290755 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1010 18:19:36.642034  290755 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1010 18:19:36.642086  290755 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1010 18:19:36.642108  290755 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1010 18:19:36.646990  290755 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1010 18:19:36.647016  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1010 18:19:38.052385  290755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:19:38.067302  290755 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1010 18:19:38.071520  290755 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1010 18:19:38.071546  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1010 18:19:38.313164  290755 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1010 18:19:38.318677  290755 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1010 18:19:38.318705  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
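
Each binary above is fetched with a "?checksum=file:..." URL, meaning the downloader verifies the file against the published .sha256 digest before installing it. A minimal sketch of that verification step, assuming the binary and its expected hex digest are already on disk:

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"os"
    )

    // verifySHA256 returns an error unless the file's SHA-256 digest
    // matches wantHex (the contents of the published .sha256 file).
    func verifySHA256(path, wantHex string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	h := sha256.New()
    	if _, err := io.Copy(h, f); err != nil {
    		return err
    	}
    	got := hex.EncodeToString(h.Sum(nil))
    	if got != wantHex {
    		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
    	}
    	return nil
    }

    func main() {
    	// Illustrative values only; the real digest comes from e.g.
    	// https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256
    	if err := verifySHA256("kubelet", "expected-digest-hex"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
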
	I1010 18:19:38.507363  290755 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:19:38.517962  290755 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1010 18:19:38.533867  290755 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:19:38.552255  290755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1010 18:19:38.567480  290755 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:19:38.571905  290755 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
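
The one-liner above is an idempotent hosts-file update: strip any existing line ending in "<tab>control-plane.minikube.internal", append the fresh mapping, then copy the temp file back with sudo. The same dedupe-then-append pattern in Go, writing to a scratch path instead of /etc/hosts so the sketch is safe to run:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost rewrites a hosts-format file so that exactly one line
    // maps hostname, preserving every unrelated entry.
    func upsertHost(path, ip, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+hostname) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// /tmp path used here so the example never touches /etc/hosts.
    	if err := upsertHost("/tmp/hosts.example", "192.168.76.2",
    		"control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
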
	I1010 18:19:38.583661  290755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:19:38.680114  290755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:19:38.707986  290755 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024 for IP: 192.168.76.2
	I1010 18:19:38.708008  290755 certs.go:195] generating shared ca certs ...
	I1010 18:19:38.708035  290755 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:38.708231  290755 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:19:38.708290  290755 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:19:38.708300  290755 certs.go:257] generating profile certs ...
	I1010 18:19:38.708367  290755 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/client.key
	I1010 18:19:38.708380  290755 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/client.crt with IP's: []
	I1010 18:19:38.995610  290755 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/client.crt ...
	I1010 18:19:38.995641  290755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/client.crt: {Name:mk8d9b4af8bddce1ee92933f77d78e6f9633cf59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:38.995827  290755 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/client.key ...
	I1010 18:19:38.995849  290755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/client.key: {Name:mkc826ef11a17b59b6dfeb7d86cbbfc96e59b639 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:38.995960  290755 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key.b1bc56db
	I1010 18:19:38.995983  290755 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.crt.b1bc56db with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1010 18:19:39.012404  290755 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.crt.b1bc56db ...
	I1010 18:19:39.012435  290755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.crt.b1bc56db: {Name:mk59e852199090b6eb5e2b3ca08754e93a3483bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:39.013257  290755 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key.b1bc56db ...
	I1010 18:19:39.013287  290755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key.b1bc56db: {Name:mk8d27a8b014996e0751bb5e6f7809aba94d859f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:39.013430  290755 certs.go:382] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.crt.b1bc56db -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.crt
	I1010 18:19:39.013537  290755 certs.go:386] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key.b1bc56db -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key
	I1010 18:19:39.013642  290755 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.key
	I1010 18:19:39.013671  290755 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.crt with IP's: []
	I1010 18:19:39.220834  290755 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.crt ...
	I1010 18:19:39.220862  290755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.crt: {Name:mk9f0d43bcac37a4c843d2cb582f0c2adfc93eae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:39.221038  290755 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.key ...
	I1010 18:19:39.221071  290755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.key: {Name:mkfb662165f70ef0a56cb9b08c738bf2739ae8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:39.221347  290755 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:19:39.221387  290755 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:19:39.221397  290755 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:19:39.221426  290755 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:19:39.221466  290755 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:19:39.221497  290755 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:19:39.221564  290755 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:19:39.223673  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:19:39.259242  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:19:39.289605  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:19:39.319469  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:19:39.359403  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:19:39.389160  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:19:39.421558  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:19:39.449548  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 18:19:39.481154  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:19:39.504128  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:19:39.527111  290755 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:19:39.550221  290755 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:19:39.567974  290755 ssh_runner.go:195] Run: openssl version
	I1010 18:19:39.577149  290755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:19:39.590131  290755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:19:39.596266  290755 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:19:39.596353  290755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:19:39.645929  290755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:19:39.656751  290755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:19:39.667911  290755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:19:39.672568  290755 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:19:39.672629  290755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:19:39.709916  290755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:19:39.723761  290755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:19:39.734227  290755 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:19:39.738430  290755 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:19:39.738480  290755 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:19:39.774724  290755 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
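
The ls/openssl/ln sequence above installs each PEM where OpenSSL-linked clients can find it: OpenSSL resolves trust anchors through symlinks named "<subject-hash>.0" in /etc/ssl/certs (b5213941.0 for minikubeCA.pem in this run). A sketch of that step, shelling out to openssl for the hash exactly as the log does; it assumes an openssl binary on PATH and writes into /tmp rather than /etc/ssl/certs:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCert symlinks certPath into certsDir under OpenSSL's
    // <subject-hash>.0 naming scheme, mirroring the ln -fs in the log.
    func linkCert(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
    		"-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("openssl: %w", err)
    	}
    	if err := os.MkdirAll(certsDir, 0755); err != nil {
    		return err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // drop any stale link first (ln -fs semantics)
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem",
    		"/tmp/certs.example"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
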
	I1010 18:19:39.785967  290755 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:19:39.790333  290755 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:19:39.790380  290755 kubeadm.go:400] StartCluster: {Name:no-preload-556024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:19:39.790446  290755 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:19:39.790501  290755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:19:39.820090  290755 cri.go:89] found id: ""
	I1010 18:19:39.820169  290755 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:19:39.830518  290755 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 18:19:39.839615  290755 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1010 18:19:39.839662  290755 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 18:19:39.848419  290755 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 18:19:39.848434  290755 kubeadm.go:157] found existing configuration files:
	
	I1010 18:19:39.848467  290755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 18:19:39.857342  290755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 18:19:39.857392  290755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 18:19:39.866148  290755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 18:19:39.874901  290755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 18:19:39.874944  290755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 18:19:39.883262  290755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 18:19:39.893264  290755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 18:19:39.893314  290755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 18:19:39.901999  290755 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 18:19:39.910570  290755 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 18:19:39.910625  290755 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 18:19:39.919046  290755 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1010 18:19:39.976294  290755 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1010 18:19:40.036602  290755 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
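
kubeadm init is launched with the version-pinned binary directory prepended to PATH and a fixed list of preflight checks suppressed, since checks like Swap, SystemVerification, or the bridge-nf sysctl cannot pass inside a docker container. A reduced sketch of that invocation pattern; the paths and ignore list are taken from the log above, and this should only ever be run against a disposable node:

    package main

    import (
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Abbreviated; the full list in the log also covers the
    	// DirAvailable-- and FileAvailable-- checks.
    	ignored := []string{"Port-10250", "Swap", "NumCPU", "Mem",
    		"SystemVerification",
    		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"}

    	// Mirror the log: run under bash so PATH is set for kubeadm itself.
    	script := `env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" ` +
    		`kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
    		`--ignore-preflight-errors=` + strings.Join(ignored, ",")

    	cmd := exec.Command("sudo", "/bin/bash", "-c", script)
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		os.Exit(1)
    	}
    }
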
	I1010 18:19:39.646755  284725 addons.go:514] duration metric: took 734.392931ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1010 18:19:39.829842  284725 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-141193" context rescaled to 1 replicas
	W1010 18:19:41.329751  284725 node_ready.go:57] node "old-k8s-version-141193" has "Ready":"False" status (will retry)
	W1010 18:19:38.390151  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	W1010 18:19:40.890568  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	I1010 18:19:40.509087  297519 cli_runner.go:164] Run: docker network inspect embed-certs-472518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:19:40.527846  297519 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1010 18:19:40.532038  297519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:19:40.543233  297519 kubeadm.go:883] updating cluster {Name:embed-certs-472518 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:19:40.543355  297519 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:19:40.543406  297519 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:19:40.576079  297519 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:19:40.576103  297519 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:19:40.576149  297519 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:19:40.603286  297519 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:19:40.603307  297519 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:19:40.603316  297519 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1010 18:19:40.603416  297519 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-472518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:19:40.603489  297519 ssh_runner.go:195] Run: crio config
	I1010 18:19:40.650771  297519 cni.go:84] Creating CNI manager for ""
	I1010 18:19:40.650795  297519 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:19:40.650818  297519 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 18:19:40.650846  297519 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-472518 NodeName:embed-certs-472518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:19:40.650994  297519 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-472518"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:19:40.651097  297519 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:19:40.660819  297519 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:19:40.660881  297519 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:19:40.669550  297519 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1010 18:19:40.685131  297519 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:19:40.700924  297519 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1010 18:19:40.714711  297519 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:19:40.718537  297519 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:19:40.730221  297519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:19:40.813343  297519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:19:40.842358  297519 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518 for IP: 192.168.94.2
	I1010 18:19:40.842384  297519 certs.go:195] generating shared ca certs ...
	I1010 18:19:40.842414  297519 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:40.842575  297519 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:19:40.842641  297519 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:19:40.842652  297519 certs.go:257] generating profile certs ...
	I1010 18:19:40.842727  297519 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/client.key
	I1010 18:19:40.842755  297519 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/client.crt with IP's: []
	I1010 18:19:41.140872  297519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/client.crt ...
	I1010 18:19:41.140951  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/client.crt: {Name:mk90ff9ee7c79c588a4bba8e2b2913e9b2856169 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:41.141174  297519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/client.key ...
	I1010 18:19:41.141222  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/client.key: {Name:mk5f78037f64b29cdbc4aed24a925c0104c67521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:41.141357  297519 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key.37abe28c
	I1010 18:19:41.141374  297519 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.crt.37abe28c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1010 18:19:42.118734  297519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.crt.37abe28c ...
	I1010 18:19:42.118766  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.crt.37abe28c: {Name:mk6adb94e12ef4d6ec0b143de7d4e7b3b5f49cfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:42.119006  297519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key.37abe28c ...
	I1010 18:19:42.119029  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key.37abe28c: {Name:mk253a2f9e9b37a69fe2c704ee927519b2f475b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:42.119182  297519 certs.go:382] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.crt.37abe28c -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.crt
	I1010 18:19:42.119295  297519 certs.go:386] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key.37abe28c -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key
	I1010 18:19:42.119362  297519 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.key
	I1010 18:19:42.119379  297519 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.crt with IP's: []
	I1010 18:19:42.263357  297519 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.crt ...
	I1010 18:19:42.263382  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.crt: {Name:mk3ab2a5390977dc0ccd0e8ceb1ea219bfba11ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:42.263564  297519 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.key ...
	I1010 18:19:42.263581  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.key: {Name:mk52d2b0e3abb2f7b91c72f29faca7726f4e4d18 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:42.263785  297519 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:19:42.263821  297519 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:19:42.263832  297519 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:19:42.263852  297519 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:19:42.263877  297519 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:19:42.263899  297519 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:19:42.263936  297519 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:19:42.264523  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:19:42.285852  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:19:42.306370  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:19:42.328942  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:19:42.351952  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1010 18:19:42.372897  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 18:19:42.393127  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:19:42.415877  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 18:19:42.437547  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:19:42.459534  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:19:42.478927  297519 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:19:42.497810  297519 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:19:42.511562  297519 ssh_runner.go:195] Run: openssl version
	I1010 18:19:42.517561  297519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:19:42.526934  297519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:19:42.530997  297519 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:19:42.531059  297519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:19:42.565782  297519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:19:42.575313  297519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:19:42.585507  297519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:19:42.589373  297519 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:19:42.589430  297519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:19:42.633495  297519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:19:42.644465  297519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:19:42.654370  297519 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:19:42.658277  297519 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:19:42.658319  297519 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:19:42.694959  297519 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:19:42.705479  297519 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:19:42.709489  297519 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:19:42.709551  297519 kubeadm.go:400] StartCluster: {Name:embed-certs-472518 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:19:42.709628  297519 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:19:42.709665  297519 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:19:42.737960  297519 cri.go:89] found id: ""
	I1010 18:19:42.738023  297519 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:19:42.748401  297519 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 18:19:42.757553  297519 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1010 18:19:42.757594  297519 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 18:19:42.766074  297519 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 18:19:42.766092  297519 kubeadm.go:157] found existing configuration files:
	
	I1010 18:19:42.766131  297519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 18:19:42.774509  297519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 18:19:42.774558  297519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 18:19:42.782612  297519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 18:19:42.790566  297519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 18:19:42.790603  297519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 18:19:42.798312  297519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 18:19:42.806575  297519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 18:19:42.806624  297519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 18:19:42.814470  297519 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 18:19:42.822367  297519 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 18:19:42.822416  297519 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 18:19:42.830820  297519 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1010 18:19:42.891919  297519 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1010 18:19:42.951751  297519 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1010 18:19:43.829592  284725 node_ready.go:57] node "old-k8s-version-141193" has "Ready":"False" status (will retry)
	W1010 18:19:46.328989  284725 node_ready.go:57] node "old-k8s-version-141193" has "Ready":"False" status (will retry)
	W1010 18:19:43.387922  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	W1010 18:19:45.388011  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	I1010 18:19:51.099014  290755 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1010 18:19:51.099141  290755 kubeadm.go:318] [preflight] Running pre-flight checks
	I1010 18:19:51.099232  290755 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1010 18:19:51.099328  290755 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1010 18:19:51.099389  290755 kubeadm.go:318] OS: Linux
	I1010 18:19:51.099459  290755 kubeadm.go:318] CGROUPS_CPU: enabled
	I1010 18:19:51.099534  290755 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1010 18:19:51.099604  290755 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1010 18:19:51.099686  290755 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1010 18:19:51.099811  290755 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1010 18:19:51.099896  290755 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1010 18:19:51.099963  290755 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1010 18:19:51.100023  290755 kubeadm.go:318] CGROUPS_IO: enabled
	I1010 18:19:51.100141  290755 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 18:19:51.100276  290755 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 18:19:51.100395  290755 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 18:19:51.100489  290755 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 18:19:51.101663  290755 out.go:252]   - Generating certificates and keys ...
	I1010 18:19:51.101768  290755 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1010 18:19:51.101889  290755 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1010 18:19:51.101990  290755 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1010 18:19:51.102102  290755 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1010 18:19:51.102192  290755 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1010 18:19:51.102270  290755 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1010 18:19:51.102353  290755 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1010 18:19:51.102539  290755 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-556024] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1010 18:19:51.102618  290755 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1010 18:19:51.102814  290755 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-556024] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1010 18:19:51.102922  290755 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1010 18:19:51.103010  290755 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1010 18:19:51.103097  290755 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1010 18:19:51.103180  290755 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 18:19:51.103260  290755 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 18:19:51.103353  290755 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 18:19:51.103444  290755 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 18:19:51.103516  290755 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 18:19:51.103597  290755 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 18:19:51.103725  290755 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 18:19:51.103826  290755 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 18:19:51.105120  290755 out.go:252]   - Booting up control plane ...
	I1010 18:19:51.105199  290755 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 18:19:51.105269  290755 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 18:19:51.105338  290755 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 18:19:51.105474  290755 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 18:19:51.105594  290755 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1010 18:19:51.105735  290755 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1010 18:19:51.105819  290755 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 18:19:51.105856  290755 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1010 18:19:51.105978  290755 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 18:19:51.106149  290755 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 18:19:51.106238  290755 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.765013ms
	I1010 18:19:51.106352  290755 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1010 18:19:51.106475  290755 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1010 18:19:51.106586  290755 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1010 18:19:51.106697  290755 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1010 18:19:51.106820  290755 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.505383881s
	I1010 18:19:51.106911  290755 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.020476677s
	I1010 18:19:51.107023  290755 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501182758s
	I1010 18:19:51.107191  290755 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 18:19:51.107331  290755 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 18:19:51.107416  290755 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 18:19:51.107662  290755 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-556024 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 18:19:51.107740  290755 kubeadm.go:318] [bootstrap-token] Using token: 1dnpw3.2s7ope8v05qlu05n
	I1010 18:19:51.109746  290755 out.go:252]   - Configuring RBAC rules ...
	I1010 18:19:51.109856  290755 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 18:19:51.109967  290755 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 18:19:51.110120  290755 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 18:19:51.110278  290755 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1010 18:19:51.110442  290755 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 18:19:51.110571  290755 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 18:19:51.110748  290755 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 18:19:51.110818  290755 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1010 18:19:51.110863  290755 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1010 18:19:51.110869  290755 kubeadm.go:318] 
	I1010 18:19:51.110927  290755 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1010 18:19:51.110933  290755 kubeadm.go:318] 
	I1010 18:19:51.111004  290755 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1010 18:19:51.111010  290755 kubeadm.go:318] 
	I1010 18:19:51.111031  290755 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1010 18:19:51.111126  290755 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 18:19:51.111198  290755 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 18:19:51.111206  290755 kubeadm.go:318] 
	I1010 18:19:51.111268  290755 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1010 18:19:51.111277  290755 kubeadm.go:318] 
	I1010 18:19:51.111355  290755 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 18:19:51.111368  290755 kubeadm.go:318] 
	I1010 18:19:51.111439  290755 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1010 18:19:51.111550  290755 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 18:19:51.111651  290755 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 18:19:51.111661  290755 kubeadm.go:318] 
	I1010 18:19:51.111789  290755 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 18:19:51.111876  290755 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1010 18:19:51.111894  290755 kubeadm.go:318] 
	I1010 18:19:51.112017  290755 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 1dnpw3.2s7ope8v05qlu05n \
	I1010 18:19:51.112177  290755 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f \
	I1010 18:19:51.112215  290755 kubeadm.go:318] 	--control-plane 
	I1010 18:19:51.112221  290755 kubeadm.go:318] 
	I1010 18:19:51.112349  290755 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1010 18:19:51.112363  290755 kubeadm.go:318] 
	I1010 18:19:51.112501  290755 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 1dnpw3.2s7ope8v05qlu05n \
	I1010 18:19:51.112673  290755 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f 
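
Note on the join commands above: the --discovery-token-ca-cert-hash value is not a hash of the certificate file itself; kubeadm publishes the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the cluster CA. A minimal Go sketch of recomputing that value (the path is the certificateDir shown in the [certs] phase above; everything else is standard library):

// Sketch: reproduce kubeadm's discovery-token-ca-cert-hash by hashing the
// DER-encoded Subject Public Key Info (SPKI) of the cluster CA certificate.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// certificateDir from the [certs] phase in the log above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
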
	I1010 18:19:51.112700  290755 cni.go:84] Creating CNI manager for ""
	I1010 18:19:51.112707  290755 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:19:51.115481  290755 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1010 18:19:48.329324  284725 node_ready.go:57] node "old-k8s-version-141193" has "Ready":"False" status (will retry)
	W1010 18:19:50.330258  284725 node_ready.go:57] node "old-k8s-version-141193" has "Ready":"False" status (will retry)
	W1010 18:19:47.891033  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	W1010 18:19:50.387313  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
	W1010 18:19:52.388193  280647 pod_ready.go:104] pod "coredns-66bc5c9577-6pgp9" is not "Ready", error: <nil>
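
The node_ready/pod_ready warnings above poll the API server until the relevant Ready condition flips to True. A minimal client-go sketch of the predicate behind those checks (the helper name is an assumption; the condition constants are the real k8s.io/api/core/v1 ones):

package ready

import corev1 "k8s.io/api/core/v1"

// isReady reports whether a pod's PodReady condition is True. The node check
// in node_ready.go is the same shape, with corev1.NodeReady evaluated over
// node.Status.Conditions.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
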
	I1010 18:19:52.934726  297519 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1010 18:19:52.934821  297519 kubeadm.go:318] [preflight] Running pre-flight checks
	I1010 18:19:52.934904  297519 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1010 18:19:52.934950  297519 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1010 18:19:52.934980  297519 kubeadm.go:318] OS: Linux
	I1010 18:19:52.935019  297519 kubeadm.go:318] CGROUPS_CPU: enabled
	I1010 18:19:52.935150  297519 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1010 18:19:52.935227  297519 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1010 18:19:52.935317  297519 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1010 18:19:52.935407  297519 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1010 18:19:52.935503  297519 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1010 18:19:52.935570  297519 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1010 18:19:52.935614  297519 kubeadm.go:318] CGROUPS_IO: enabled
	I1010 18:19:52.935678  297519 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 18:19:52.935810  297519 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 18:19:52.935922  297519 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 18:19:52.935995  297519 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 18:19:52.937599  297519 out.go:252]   - Generating certificates and keys ...
	I1010 18:19:52.937702  297519 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1010 18:19:52.937764  297519 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1010 18:19:52.937837  297519 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1010 18:19:52.937913  297519 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1010 18:19:52.938002  297519 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1010 18:19:52.938087  297519 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1010 18:19:52.938167  297519 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1010 18:19:52.938292  297519 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-472518 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1010 18:19:52.938357  297519 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1010 18:19:52.938462  297519 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-472518 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1010 18:19:52.938520  297519 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1010 18:19:52.938577  297519 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1010 18:19:52.938619  297519 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1010 18:19:52.938664  297519 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 18:19:52.938707  297519 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 18:19:52.938757  297519 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 18:19:52.938811  297519 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 18:19:52.938889  297519 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 18:19:52.938967  297519 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 18:19:52.939039  297519 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 18:19:52.939133  297519 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 18:19:52.940424  297519 out.go:252]   - Booting up control plane ...
	I1010 18:19:52.940499  297519 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 18:19:52.940563  297519 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 18:19:52.940634  297519 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 18:19:52.940746  297519 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 18:19:52.940821  297519 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1010 18:19:52.940920  297519 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1010 18:19:52.940991  297519 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 18:19:52.941029  297519 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1010 18:19:52.941176  297519 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 18:19:52.941295  297519 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 18:19:52.941352  297519 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001306157s
	I1010 18:19:52.941439  297519 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1010 18:19:52.941512  297519 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1010 18:19:52.941609  297519 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1010 18:19:52.941705  297519 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1010 18:19:52.941826  297519 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.129689745s
	I1010 18:19:52.941895  297519 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.412778766s
	I1010 18:19:52.941976  297519 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002254768s
	I1010 18:19:52.942146  297519 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 18:19:52.942256  297519 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 18:19:52.942328  297519 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 18:19:52.942531  297519 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-472518 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 18:19:52.942586  297519 kubeadm.go:318] [bootstrap-token] Using token: wv6fn7.57zl6x7bcm0holor
	I1010 18:19:52.943725  297519 out.go:252]   - Configuring RBAC rules ...
	I1010 18:19:52.943845  297519 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 18:19:52.943918  297519 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 18:19:52.944036  297519 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 18:19:52.944194  297519 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 18:19:52.944369  297519 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 18:19:52.944484  297519 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 18:19:52.944615  297519 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 18:19:52.944686  297519 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1010 18:19:52.944763  297519 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1010 18:19:52.944773  297519 kubeadm.go:318] 
	I1010 18:19:52.944857  297519 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1010 18:19:52.944866  297519 kubeadm.go:318] 
	I1010 18:19:52.944983  297519 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1010 18:19:52.944998  297519 kubeadm.go:318] 
	I1010 18:19:52.945044  297519 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1010 18:19:52.945129  297519 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 18:19:52.945180  297519 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 18:19:52.945187  297519 kubeadm.go:318] 
	I1010 18:19:52.945261  297519 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1010 18:19:52.945271  297519 kubeadm.go:318] 
	I1010 18:19:52.945338  297519 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 18:19:52.945347  297519 kubeadm.go:318] 
	I1010 18:19:52.945409  297519 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1010 18:19:52.945513  297519 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 18:19:52.945576  297519 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 18:19:52.945582  297519 kubeadm.go:318] 
	I1010 18:19:52.945658  297519 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 18:19:52.945724  297519 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1010 18:19:52.945729  297519 kubeadm.go:318] 
	I1010 18:19:52.945810  297519 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token wv6fn7.57zl6x7bcm0holor \
	I1010 18:19:52.945904  297519 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f \
	I1010 18:19:52.945950  297519 kubeadm.go:318] 	--control-plane 
	I1010 18:19:52.945959  297519 kubeadm.go:318] 
	I1010 18:19:52.946083  297519 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1010 18:19:52.946097  297519 kubeadm.go:318] 
	I1010 18:19:52.946226  297519 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token wv6fn7.57zl6x7bcm0holor \
	I1010 18:19:52.946383  297519 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f 
	I1010 18:19:52.946399  297519 cni.go:84] Creating CNI manager for ""
	I1010 18:19:52.946407  297519 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:19:52.947577  297519 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1010 18:19:52.948497  297519 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1010 18:19:52.953078  297519 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1010 18:19:52.953098  297519 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1010 18:19:52.968787  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1010 18:19:53.257505  297519 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 18:19:53.257532  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:53.257570  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-472518 minikube.k8s.io/updated_at=2025_10_10T18_19_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46 minikube.k8s.io/name=embed-certs-472518 minikube.k8s.io/primary=true
	I1010 18:19:53.347324  297519 ops.go:34] apiserver oom_adj: -16
	I1010 18:19:53.347341  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:53.848262  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
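
The burst of `kubectl get sa default` runs above is the elevateKubeSystemPrivileges wait: the command keeps failing until the controller-manager's service-account controller has created the `default` ServiceAccount, after which the cluster-admin binding applied earlier can take effect. A sketch of that loop (helper name and timeout are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it exits 0,
// mirroring the ~500ms cadence visible in the log timestamps above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}
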
	I1010 18:19:52.828952  284725 node_ready.go:49] node "old-k8s-version-141193" is "Ready"
	I1010 18:19:52.828982  284725 node_ready.go:38] duration metric: took 13.503624439s for node "old-k8s-version-141193" to be "Ready" ...
	I1010 18:19:52.829002  284725 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:19:52.829112  284725 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:19:52.842654  284725 api_server.go:72] duration metric: took 13.930344671s to wait for apiserver process to appear ...
	I1010 18:19:52.842683  284725 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:19:52.842709  284725 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:19:52.846757  284725 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1010 18:19:52.847971  284725 api_server.go:141] control plane version: v1.28.0
	I1010 18:19:52.847993  284725 api_server.go:131] duration metric: took 5.303517ms to wait for apiserver health ...
	I1010 18:19:52.848004  284725 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:19:52.851624  284725 system_pods.go:59] 8 kube-system pods found
	I1010 18:19:52.851650  284725 system_pods.go:61] "coredns-5dd5756b68-qfwck" [d60fe80c-7b6d-46ae-bf0d-1bc8c178ebf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:19:52.851655  284725 system_pods.go:61] "etcd-old-k8s-version-141193" [624eb63b-ba8a-43ad-835d-c604e5375d5b] Running
	I1010 18:19:52.851661  284725 system_pods.go:61] "kindnet-wjlh2" [388273e8-4ad1-4584-b43c-c20634781b0a] Running
	I1010 18:19:52.851666  284725 system_pods.go:61] "kube-apiserver-old-k8s-version-141193" [cae658f3-06d6-498f-a653-5e6f227189ec] Running
	I1010 18:19:52.851672  284725 system_pods.go:61] "kube-controller-manager-old-k8s-version-141193" [9eac32cb-f48f-43c2-afcb-e3fc2a074abf] Running
	I1010 18:19:52.851676  284725 system_pods.go:61] "kube-proxy-n9klp" [7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1] Running
	I1010 18:19:52.851679  284725 system_pods.go:61] "kube-scheduler-old-k8s-version-141193" [54df6abe-d778-4d3c-a74d-bdb5c192042d] Running
	I1010 18:19:52.851684  284725 system_pods.go:61] "storage-provisioner" [ab2fa802-aedc-4f1c-ac3d-56e90d21c38b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:19:52.851693  284725 system_pods.go:74] duration metric: took 3.683568ms to wait for pod list to return data ...
	I1010 18:19:52.851699  284725 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:19:52.853935  284725 default_sa.go:45] found service account: "default"
	I1010 18:19:52.853956  284725 default_sa.go:55] duration metric: took 2.249964ms for default service account to be created ...
	I1010 18:19:52.853966  284725 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:19:52.857855  284725 system_pods.go:86] 8 kube-system pods found
	I1010 18:19:52.857882  284725 system_pods.go:89] "coredns-5dd5756b68-qfwck" [d60fe80c-7b6d-46ae-bf0d-1bc8c178ebf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:19:52.857887  284725 system_pods.go:89] "etcd-old-k8s-version-141193" [624eb63b-ba8a-43ad-835d-c604e5375d5b] Running
	I1010 18:19:52.857893  284725 system_pods.go:89] "kindnet-wjlh2" [388273e8-4ad1-4584-b43c-c20634781b0a] Running
	I1010 18:19:52.857898  284725 system_pods.go:89] "kube-apiserver-old-k8s-version-141193" [cae658f3-06d6-498f-a653-5e6f227189ec] Running
	I1010 18:19:52.857902  284725 system_pods.go:89] "kube-controller-manager-old-k8s-version-141193" [9eac32cb-f48f-43c2-afcb-e3fc2a074abf] Running
	I1010 18:19:52.857905  284725 system_pods.go:89] "kube-proxy-n9klp" [7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1] Running
	I1010 18:19:52.857908  284725 system_pods.go:89] "kube-scheduler-old-k8s-version-141193" [54df6abe-d778-4d3c-a74d-bdb5c192042d] Running
	I1010 18:19:52.857912  284725 system_pods.go:89] "storage-provisioner" [ab2fa802-aedc-4f1c-ac3d-56e90d21c38b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:19:52.857949  284725 retry.go:31] will retry after 256.84422ms: missing components: kube-dns
	I1010 18:19:53.119825  284725 system_pods.go:86] 8 kube-system pods found
	I1010 18:19:53.119854  284725 system_pods.go:89] "coredns-5dd5756b68-qfwck" [d60fe80c-7b6d-46ae-bf0d-1bc8c178ebf3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:19:53.119865  284725 system_pods.go:89] "etcd-old-k8s-version-141193" [624eb63b-ba8a-43ad-835d-c604e5375d5b] Running
	I1010 18:19:53.119871  284725 system_pods.go:89] "kindnet-wjlh2" [388273e8-4ad1-4584-b43c-c20634781b0a] Running
	I1010 18:19:53.119875  284725 system_pods.go:89] "kube-apiserver-old-k8s-version-141193" [cae658f3-06d6-498f-a653-5e6f227189ec] Running
	I1010 18:19:53.119879  284725 system_pods.go:89] "kube-controller-manager-old-k8s-version-141193" [9eac32cb-f48f-43c2-afcb-e3fc2a074abf] Running
	I1010 18:19:53.119883  284725 system_pods.go:89] "kube-proxy-n9klp" [7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1] Running
	I1010 18:19:53.119886  284725 system_pods.go:89] "kube-scheduler-old-k8s-version-141193" [54df6abe-d778-4d3c-a74d-bdb5c192042d] Running
	I1010 18:19:53.119892  284725 system_pods.go:89] "storage-provisioner" [ab2fa802-aedc-4f1c-ac3d-56e90d21c38b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:19:53.119909  284725 retry.go:31] will retry after 347.42707ms: missing components: kube-dns
	I1010 18:19:53.472355  284725 system_pods.go:86] 8 kube-system pods found
	I1010 18:19:53.472387  284725 system_pods.go:89] "coredns-5dd5756b68-qfwck" [d60fe80c-7b6d-46ae-bf0d-1bc8c178ebf3] Running
	I1010 18:19:53.472395  284725 system_pods.go:89] "etcd-old-k8s-version-141193" [624eb63b-ba8a-43ad-835d-c604e5375d5b] Running
	I1010 18:19:53.472400  284725 system_pods.go:89] "kindnet-wjlh2" [388273e8-4ad1-4584-b43c-c20634781b0a] Running
	I1010 18:19:53.472406  284725 system_pods.go:89] "kube-apiserver-old-k8s-version-141193" [cae658f3-06d6-498f-a653-5e6f227189ec] Running
	I1010 18:19:53.472419  284725 system_pods.go:89] "kube-controller-manager-old-k8s-version-141193" [9eac32cb-f48f-43c2-afcb-e3fc2a074abf] Running
	I1010 18:19:53.472422  284725 system_pods.go:89] "kube-proxy-n9klp" [7f16dbb3-cc34-448d-91ba-fdeb22a8c5e1] Running
	I1010 18:19:53.472427  284725 system_pods.go:89] "kube-scheduler-old-k8s-version-141193" [54df6abe-d778-4d3c-a74d-bdb5c192042d] Running
	I1010 18:19:53.472432  284725 system_pods.go:89] "storage-provisioner" [ab2fa802-aedc-4f1c-ac3d-56e90d21c38b] Running
	I1010 18:19:53.472442  284725 system_pods.go:126] duration metric: took 618.469286ms to wait for k8s-apps to be running ...
	I1010 18:19:53.472457  284725 system_svc.go:44] waiting for kubelet service to be running ...
	I1010 18:19:53.472508  284725 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:19:53.486830  284725 system_svc.go:56] duration metric: took 14.364673ms WaitForService to wait for kubelet
	I1010 18:19:53.486860  284725 kubeadm.go:586] duration metric: took 14.574554478s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:19:53.486877  284725 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:19:53.489587  284725 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:19:53.489610  284725 node_conditions.go:123] node cpu capacity is 8
	I1010 18:19:53.489630  284725 node_conditions.go:105] duration metric: took 2.747794ms to run NodePressure ...
	I1010 18:19:53.489645  284725 start.go:241] waiting for startup goroutines ...
	I1010 18:19:53.489657  284725 start.go:246] waiting for cluster config update ...
	I1010 18:19:53.489684  284725 start.go:255] writing updated cluster config ...
	I1010 18:19:53.489914  284725 ssh_runner.go:195] Run: rm -f paused
	I1010 18:19:53.493783  284725 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:19:53.497968  284725 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-qfwck" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:53.502536  284725 pod_ready.go:94] pod "coredns-5dd5756b68-qfwck" is "Ready"
	I1010 18:19:53.502555  284725 pod_ready.go:86] duration metric: took 4.567345ms for pod "coredns-5dd5756b68-qfwck" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:53.505810  284725 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:53.510096  284725 pod_ready.go:94] pod "etcd-old-k8s-version-141193" is "Ready"
	I1010 18:19:53.510121  284725 pod_ready.go:86] duration metric: took 4.29166ms for pod "etcd-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:53.512958  284725 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:53.518200  284725 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-141193" is "Ready"
	I1010 18:19:53.518225  284725 pod_ready.go:86] duration metric: took 5.242025ms for pod "kube-apiserver-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:53.521730  284725 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:53.897736  284725 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-141193" is "Ready"
	I1010 18:19:53.897762  284725 pod_ready.go:86] duration metric: took 376.012519ms for pod "kube-controller-manager-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.098310  284725 pod_ready.go:83] waiting for pod "kube-proxy-n9klp" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.498776  284725 pod_ready.go:94] pod "kube-proxy-n9klp" is "Ready"
	I1010 18:19:54.498801  284725 pod_ready.go:86] duration metric: took 400.468138ms for pod "kube-proxy-n9klp" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.698416  284725 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:55.099363  284725 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-141193" is "Ready"
	I1010 18:19:55.099406  284725 pod_ready.go:86] duration metric: took 400.966691ms for pod "kube-scheduler-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:55.099420  284725 pod_ready.go:40] duration metric: took 1.605614614s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:19:55.146513  284725 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1010 18:19:55.148157  284725 out.go:203] 
	W1010 18:19:55.149271  284725 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1010 18:19:55.150341  284725 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1010 18:19:55.151783  284725 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-141193" cluster and "default" namespace by default
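
The kubectl warning above comes from a plain minor-version comparison: client 1.34 against cluster 1.28 is a skew of 6, well beyond the one minor version kubectl officially supports in either direction. A rough sketch of the check (parsing deliberately simplified; the real logic sits around start.go:624):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" version strings, e.g. ("1.34.1", "1.28.0") -> 6.
func minorSkew(client, cluster string) int {
	minor := func(v string) int {
		n, _ := strconv.Atoi(strings.Split(v, ".")[1])
		return n
	}
	d := minor(client) - minor(cluster)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	if skew := minorSkew("1.34.1", "1.28.0"); skew > 1 {
		fmt.Printf("minor skew: %d, kubectl may be incompatible\n", skew)
	}
}
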
	I1010 18:19:51.116921  290755 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1010 18:19:51.123354  290755 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1010 18:19:51.123376  290755 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1010 18:19:51.142784  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
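
`scp memory --> /var/tmp/minikube/cni.yaml` means the kindnet manifest is held as an in-memory asset and streamed over the already-open SSH connection instead of being copied from a local file. A sketch of that pattern with golang.org/x/crypto/ssh (names are illustrative; an ssh.Session can only Run once, so the subsequent `kubectl apply` would use a fresh session):

package push

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

// pushManifest streams manifest bytes to a remote path by piping them into
// `cat` on the node; no temporary file is created on the local side.
func pushManifest(client *ssh.Client, manifest []byte, path string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(manifest)
	return sess.Run("sudo sh -c 'cat > " + path + "'")
}
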
	I1010 18:19:51.411166  290755 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 18:19:51.411272  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:51.411282  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-556024 minikube.k8s.io/updated_at=2025_10_10T18_19_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46 minikube.k8s.io/name=no-preload-556024 minikube.k8s.io/primary=true
	I1010 18:19:51.425411  290755 ops.go:34] apiserver oom_adj: -16
	I1010 18:19:51.502620  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:52.003160  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:52.502895  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:53.003785  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:53.503248  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:54.003396  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:54.503700  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:55.002752  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:55.503206  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:56.003274  290755 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:56.077623  290755 kubeadm.go:1113] duration metric: took 4.666423167s to wait for elevateKubeSystemPrivileges
	I1010 18:19:56.077655  290755 kubeadm.go:402] duration metric: took 16.287277857s to StartCluster
	I1010 18:19:56.077673  290755 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:56.077767  290755 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:19:56.079085  290755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:56.079348  290755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:19:56.079361  290755 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:19:56.079435  290755 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:19:56.079520  290755 addons.go:69] Setting storage-provisioner=true in profile "no-preload-556024"
	I1010 18:19:56.079526  290755 config.go:182] Loaded profile config "no-preload-556024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:19:56.079549  290755 addons.go:69] Setting default-storageclass=true in profile "no-preload-556024"
	I1010 18:19:56.079558  290755 addons.go:238] Setting addon storage-provisioner=true in "no-preload-556024"
	I1010 18:19:56.079573  290755 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-556024"
	I1010 18:19:56.079594  290755 host.go:66] Checking if "no-preload-556024" exists ...
	I1010 18:19:56.079913  290755 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:19:56.080093  290755 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:19:56.081817  290755 out.go:179] * Verifying Kubernetes components...
	I1010 18:19:56.082925  290755 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:19:56.103942  290755 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:19:56.105022  290755 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:19:56.105043  290755 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:19:56.105124  290755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:19:56.105296  290755 addons.go:238] Setting addon default-storageclass=true in "no-preload-556024"
	I1010 18:19:56.105339  290755 host.go:66] Checking if "no-preload-556024" exists ...
	I1010 18:19:56.105808  290755 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
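
Each of the cli_runner steps above shells out to `docker container inspect --format={{.State.Status}}`, which prints a single word such as `running` for the profile container. A minimal sketch of that probe (the real wrapper adds logging and retry handling):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus returns docker's view of a container's state, e.g. "running".
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerStatus("no-preload-556024")
	if err != nil {
		panic(err)
	}
	fmt.Println(status)
}
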
	I1010 18:19:54.888315  280647 pod_ready.go:94] pod "coredns-66bc5c9577-6pgp9" is "Ready"
	I1010 18:19:54.888342  280647 pod_ready.go:86] duration metric: took 32.506196324s for pod "coredns-66bc5c9577-6pgp9" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.888354  280647 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hwdcx" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.890145  280647 pod_ready.go:99] pod "coredns-66bc5c9577-hwdcx" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-hwdcx" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-hwdcx" not found
	I1010 18:19:54.890165  280647 pod_ready.go:86] duration metric: took 1.796566ms for pod "coredns-66bc5c9577-hwdcx" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.892554  280647 pod_ready.go:83] waiting for pod "etcd-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.896572  280647 pod_ready.go:94] pod "etcd-bridge-078032" is "Ready"
	I1010 18:19:54.896593  280647 pod_ready.go:86] duration metric: took 4.023309ms for pod "etcd-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.898779  280647 pod_ready.go:83] waiting for pod "kube-apiserver-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.902533  280647 pod_ready.go:94] pod "kube-apiserver-bridge-078032" is "Ready"
	I1010 18:19:54.902552  280647 pod_ready.go:86] duration metric: took 3.751818ms for pod "kube-apiserver-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:54.904462  280647 pod_ready.go:83] waiting for pod "kube-controller-manager-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:55.290421  280647 pod_ready.go:94] pod "kube-controller-manager-bridge-078032" is "Ready"
	I1010 18:19:55.290453  280647 pod_ready.go:86] duration metric: took 385.959733ms for pod "kube-controller-manager-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:55.486162  280647 pod_ready.go:83] waiting for pod "kube-proxy-87h4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:55.887342  280647 pod_ready.go:94] pod "kube-proxy-87h4s" is "Ready"
	I1010 18:19:55.887365  280647 pod_ready.go:86] duration metric: took 401.175115ms for pod "kube-proxy-87h4s" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:56.087950  280647 pod_ready.go:83] waiting for pod "kube-scheduler-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:56.486783  280647 pod_ready.go:94] pod "kube-scheduler-bridge-078032" is "Ready"
	I1010 18:19:56.486816  280647 pod_ready.go:86] duration metric: took 398.835264ms for pod "kube-scheduler-bridge-078032" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:19:56.486831  280647 pod_ready.go:40] duration metric: took 34.109526609s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:19:56.540430  280647 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:19:56.541841  280647 out.go:179] * Done! kubectl is now configured to use "bridge-078032" cluster and "default" namespace by default
	W1010 18:19:56.548406  280647 root.go:91] failed to log command end to audit: failed to find a log row with id equal to faab746a-85bd-4429-8bb4-e2a5039cb262
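
The coredns-66bc5c9577-hwdcx case above is why the wait is phrased as Ready "or be gone": once the deployment is rescaled to a single replica, the surplus pod is deleted, and a NotFound response counts as success rather than an error. A hedged client-go sketch of that branch (helper name assumed):

package ready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// readyOrGone returns true when the pod is Ready, or when it no longer exists
// at all (the "be gone" branch that matched coredns-66bc5c9577-hwdcx above).
func readyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil
	}
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true, nil
		}
	}
	return false, nil
}
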
	I1010 18:19:56.138224  290755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:19:56.140441  290755 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:19:56.140465  290755 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:19:56.140522  290755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:19:56.167454  290755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:19:56.183238  290755 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1010 18:19:56.227730  290755 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:19:56.329192  290755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:19:56.356571  290755 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:19:56.360115  290755 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1010 18:19:56.364276  290755 node_ready.go:35] waiting up to 6m0s for node "no-preload-556024" to be "Ready" ...
	I1010 18:19:56.686727  290755 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1010 18:19:54.348043  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:54.848239  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:55.347641  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:55.848180  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:56.348383  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:56.847639  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:57.348010  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:57.848274  297519 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:19:57.971252  297519 kubeadm.go:1113] duration metric: took 4.713784514s to wait for elevateKubeSystemPrivileges
	I1010 18:19:57.971316  297519 kubeadm.go:402] duration metric: took 15.261769903s to StartCluster
	I1010 18:19:57.971338  297519 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:57.971441  297519 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:19:57.975955  297519 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:19:57.976886  297519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:19:57.977140  297519 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:19:57.977243  297519 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:19:57.977828  297519 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-472518"
	I1010 18:19:57.977853  297519 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-472518"
	I1010 18:19:57.977880  297519 host.go:66] Checking if "embed-certs-472518" exists ...
	I1010 18:19:57.977571  297519 config.go:182] Loaded profile config "embed-certs-472518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:19:57.978298  297519 addons.go:69] Setting default-storageclass=true in profile "embed-certs-472518"
	I1010 18:19:57.978314  297519 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-472518"
	I1010 18:19:57.978667  297519 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:19:57.979381  297519 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:19:57.980035  297519 out.go:179] * Verifying Kubernetes components...
	I1010 18:19:57.981300  297519 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:19:58.019774  297519 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:19:58.021708  297519 addons.go:238] Setting addon default-storageclass=true in "embed-certs-472518"
	I1010 18:19:58.021757  297519 host.go:66] Checking if "embed-certs-472518" exists ...
	I1010 18:19:58.022356  297519 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:19:58.025890  297519 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:19:58.025919  297519 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:19:58.025975  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:58.055619  297519 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:19:58.055648  297519 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:19:58.055731  297519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:19:58.065831  297519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:19:58.090621  297519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:19:58.199090  297519 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1010 18:19:58.236956  297519 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:19:58.376816  297519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:19:58.414355  297519 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:19:58.569341  297519 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1010 18:19:58.570333  297519 node_ready.go:35] waiting up to 6m0s for node "embed-certs-472518" to be "Ready" ...
	I1010 18:19:58.861536  297519 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1010 18:19:58.862652  297519 addons.go:514] duration metric: took 885.405072ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1010 18:19:59.077258  297519 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-472518" context rescaled to 1 replica
	I1010 18:19:56.690085  290755 addons.go:514] duration metric: took 610.6488ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1010 18:19:56.865441  290755 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-556024" context rescaled to 1 replica
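
The "rescaled to 1 replica" lines record minikube trimming CoreDNS down to a single replica on a single-node cluster. A sketch of the equivalent operation through client-go's scale subresource (assumed to mirror kapi.go:214):

package scale

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleCoreDNS sets the coredns Deployment to one replica via the scale
// subresource, avoiding a read-modify-write of the whole Deployment object.
func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
	s, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	s.Spec.Replicas = 1
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", s, metav1.UpdateOptions{})
	return err
}
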
	W1010 18:19:58.375084  290755 node_ready.go:57] node "no-preload-556024" has "Ready":"False" status (will retry)
	W1010 18:20:00.867226  290755 node_ready.go:57] node "no-preload-556024" has "Ready":"False" status (will retry)
	W1010 18:20:00.573915  297519 node_ready.go:57] node "embed-certs-472518" has "Ready":"False" status (will retry)
	W1010 18:20:03.074483  297519 node_ready.go:57] node "embed-certs-472518" has "Ready":"False" status (will retry)
	W1010 18:20:02.867471  290755 node_ready.go:57] node "no-preload-556024" has "Ready":"False" status (will retry)
	W1010 18:20:04.868110  290755 node_ready.go:57] node "no-preload-556024" has "Ready":"False" status (will retry)
	W1010 18:20:05.573831  297519 node_ready.go:57] node "embed-certs-472518" has "Ready":"False" status (will retry)
	W1010 18:20:07.573878  297519 node_ready.go:57] node "embed-certs-472518" has "Ready":"False" status (will retry)
	W1010 18:20:07.368104  290755 node_ready.go:57] node "no-preload-556024" has "Ready":"False" status (will retry)
	W1010 18:20:09.867653  290755 node_ready.go:57] node "no-preload-556024" has "Ready":"False" status (will retry)
	I1010 18:20:10.367720  290755 node_ready.go:49] node "no-preload-556024" is "Ready"
	I1010 18:20:10.367749  290755 node_ready.go:38] duration metric: took 14.003439809s for node "no-preload-556024" to be "Ready" ...
	I1010 18:20:10.367766  290755 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:20:10.367820  290755 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:20:10.382024  290755 api_server.go:72] duration metric: took 14.302627352s to wait for apiserver process to appear ...
	I1010 18:20:10.382064  290755 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:20:10.382085  290755 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1010 18:20:10.387892  290755 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
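
The healthz wait above is a plain HTTPS GET against the API server that succeeds once it returns 200 with the body `ok` (anonymous access to /healthz is allowed by the default system:public-info-viewer binding). A rough sketch; skipping TLS verification here is an assumption for brevity, while minikube's real client trusts the cluster CA generated in the [certs] phase:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch-only shortcut; production code should pin the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}
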
	I1010 18:20:10.388950  290755 api_server.go:141] control plane version: v1.34.1
	I1010 18:20:10.388973  290755 api_server.go:131] duration metric: took 6.901313ms to wait for apiserver health ...
	I1010 18:20:10.388982  290755 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:20:10.392027  290755 system_pods.go:59] 8 kube-system pods found
	I1010 18:20:10.392100  290755 system_pods.go:61] "coredns-66bc5c9577-wpsrd" [316be091-2de7-417c-b44b-1d26108e3ed3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:10.392116  290755 system_pods.go:61] "etcd-no-preload-556024" [0f8f77e3-e838-4f27-9f17-2cd264198574] Running
	I1010 18:20:10.392128  290755 system_pods.go:61] "kindnet-wsk6h" [71384861-5289-4d2b-8d62-b7d2c27d86b8] Running
	I1010 18:20:10.392133  290755 system_pods.go:61] "kube-apiserver-no-preload-556024" [7efe66ae-83bf-4ea5-a271-d8e944f74053] Running
	I1010 18:20:10.392139  290755 system_pods.go:61] "kube-controller-manager-no-preload-556024" [9e7fbd67-ce38-425d-b80d-b8ff3748fa70] Running
	I1010 18:20:10.392144  290755 system_pods.go:61] "kube-proxy-frchp" [3457ebf4-7608-4c78-b8dc-3a92a2fb32ae] Running
	I1010 18:20:10.392152  290755 system_pods.go:61] "kube-scheduler-no-preload-556024" [c6fb51f0-cf8d-4a56-aba5-95aff4190b44] Running
	I1010 18:20:10.392159  290755 system_pods.go:61] "storage-provisioner" [42a21c5e-4318-43f7-8d2a-dc62676b17c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:20:10.392169  290755 system_pods.go:74] duration metric: took 3.180572ms to wait for pod list to return data ...
	I1010 18:20:10.392178  290755 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:20:10.394728  290755 default_sa.go:45] found service account: "default"
	I1010 18:20:10.394744  290755 default_sa.go:55] duration metric: took 2.559159ms for default service account to be created ...
	I1010 18:20:10.394754  290755 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:20:10.397590  290755 system_pods.go:86] 8 kube-system pods found
	I1010 18:20:10.397616  290755 system_pods.go:89] "coredns-66bc5c9577-wpsrd" [316be091-2de7-417c-b44b-1d26108e3ed3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:10.397626  290755 system_pods.go:89] "etcd-no-preload-556024" [0f8f77e3-e838-4f27-9f17-2cd264198574] Running
	I1010 18:20:10.397631  290755 system_pods.go:89] "kindnet-wsk6h" [71384861-5289-4d2b-8d62-b7d2c27d86b8] Running
	I1010 18:20:10.397635  290755 system_pods.go:89] "kube-apiserver-no-preload-556024" [7efe66ae-83bf-4ea5-a271-d8e944f74053] Running
	I1010 18:20:10.397638  290755 system_pods.go:89] "kube-controller-manager-no-preload-556024" [9e7fbd67-ce38-425d-b80d-b8ff3748fa70] Running
	I1010 18:20:10.397642  290755 system_pods.go:89] "kube-proxy-frchp" [3457ebf4-7608-4c78-b8dc-3a92a2fb32ae] Running
	I1010 18:20:10.397646  290755 system_pods.go:89] "kube-scheduler-no-preload-556024" [c6fb51f0-cf8d-4a56-aba5-95aff4190b44] Running
	I1010 18:20:10.397650  290755 system_pods.go:89] "storage-provisioner" [42a21c5e-4318-43f7-8d2a-dc62676b17c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:20:10.397669  290755 retry.go:31] will retry after 306.414769ms: missing components: kube-dns
	I1010 18:20:10.708655  290755 system_pods.go:86] 8 kube-system pods found
	I1010 18:20:10.708684  290755 system_pods.go:89] "coredns-66bc5c9577-wpsrd" [316be091-2de7-417c-b44b-1d26108e3ed3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:10.708690  290755 system_pods.go:89] "etcd-no-preload-556024" [0f8f77e3-e838-4f27-9f17-2cd264198574] Running
	I1010 18:20:10.708696  290755 system_pods.go:89] "kindnet-wsk6h" [71384861-5289-4d2b-8d62-b7d2c27d86b8] Running
	I1010 18:20:10.708700  290755 system_pods.go:89] "kube-apiserver-no-preload-556024" [7efe66ae-83bf-4ea5-a271-d8e944f74053] Running
	I1010 18:20:10.708704  290755 system_pods.go:89] "kube-controller-manager-no-preload-556024" [9e7fbd67-ce38-425d-b80d-b8ff3748fa70] Running
	I1010 18:20:10.708708  290755 system_pods.go:89] "kube-proxy-frchp" [3457ebf4-7608-4c78-b8dc-3a92a2fb32ae] Running
	I1010 18:20:10.708712  290755 system_pods.go:89] "kube-scheduler-no-preload-556024" [c6fb51f0-cf8d-4a56-aba5-95aff4190b44] Running
	I1010 18:20:10.708717  290755 system_pods.go:89] "storage-provisioner" [42a21c5e-4318-43f7-8d2a-dc62676b17c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:20:10.708731  290755 retry.go:31] will retry after 353.029376ms: missing components: kube-dns
	I1010 18:20:11.065817  290755 system_pods.go:86] 8 kube-system pods found
	I1010 18:20:11.065854  290755 system_pods.go:89] "coredns-66bc5c9577-wpsrd" [316be091-2de7-417c-b44b-1d26108e3ed3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:11.065863  290755 system_pods.go:89] "etcd-no-preload-556024" [0f8f77e3-e838-4f27-9f17-2cd264198574] Running
	I1010 18:20:11.065871  290755 system_pods.go:89] "kindnet-wsk6h" [71384861-5289-4d2b-8d62-b7d2c27d86b8] Running
	I1010 18:20:11.065877  290755 system_pods.go:89] "kube-apiserver-no-preload-556024" [7efe66ae-83bf-4ea5-a271-d8e944f74053] Running
	I1010 18:20:11.065882  290755 system_pods.go:89] "kube-controller-manager-no-preload-556024" [9e7fbd67-ce38-425d-b80d-b8ff3748fa70] Running
	I1010 18:20:11.065885  290755 system_pods.go:89] "kube-proxy-frchp" [3457ebf4-7608-4c78-b8dc-3a92a2fb32ae] Running
	I1010 18:20:11.065889  290755 system_pods.go:89] "kube-scheduler-no-preload-556024" [c6fb51f0-cf8d-4a56-aba5-95aff4190b44] Running
	I1010 18:20:11.065893  290755 system_pods.go:89] "storage-provisioner" [42a21c5e-4318-43f7-8d2a-dc62676b17c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:20:11.065908  290755 retry.go:31] will retry after 298.812158ms: missing components: kube-dns
	I1010 18:20:09.573247  297519 node_ready.go:49] node "embed-certs-472518" is "Ready"
	I1010 18:20:09.573277  297519 node_ready.go:38] duration metric: took 11.002920648s for node "embed-certs-472518" to be "Ready" ...
	I1010 18:20:09.573291  297519 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:20:09.573336  297519 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:20:09.586812  297519 api_server.go:72] duration metric: took 11.609194256s to wait for apiserver process to appear ...
	I1010 18:20:09.586839  297519 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:20:09.586861  297519 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1010 18:20:09.591934  297519 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1010 18:20:09.592963  297519 api_server.go:141] control plane version: v1.34.1
	I1010 18:20:09.592986  297519 api_server.go:131] duration metric: took 6.141246ms to wait for apiserver health ...
	I1010 18:20:09.592995  297519 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:20:09.595782  297519 system_pods.go:59] 8 kube-system pods found
	I1010 18:20:09.595815  297519 system_pods.go:61] "coredns-66bc5c9577-hrcxc" [98494133-86f7-4d52-9de0-1b648c4e1eac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:09.595826  297519 system_pods.go:61] "etcd-embed-certs-472518" [ef258b42-940e-4df8-bda7-2abda18693ec] Running
	I1010 18:20:09.595835  297519 system_pods.go:61] "kindnet-kpr69" [a2bc6e25-f261-43aa-b10b-35757900e93b] Running
	I1010 18:20:09.595840  297519 system_pods.go:61] "kube-apiserver-embed-certs-472518" [d3c6aec3-5dbe-4bda-a057-5ac1cacd6dc8] Running
	I1010 18:20:09.595847  297519 system_pods.go:61] "kube-controller-manager-embed-certs-472518" [35d677fb-3f5f-4b3e-8175-60234a80c67e] Running
	I1010 18:20:09.595854  297519 system_pods.go:61] "kube-proxy-bq985" [e2d6bf76-4b03-4118-b61b-605d27646095] Running
	I1010 18:20:09.595863  297519 system_pods.go:61] "kube-scheduler-embed-certs-472518" [7ebab2fe-6192-45eb-80a1-a169ea655e6c] Running
	I1010 18:20:09.595872  297519 system_pods.go:61] "storage-provisioner" [3237266d-6c19-4af5-aef2-8d99c561d535] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:20:09.595881  297519 system_pods.go:74] duration metric: took 2.881711ms to wait for pod list to return data ...
	I1010 18:20:09.595890  297519 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:20:09.598119  297519 default_sa.go:45] found service account: "default"
	I1010 18:20:09.598135  297519 default_sa.go:55] duration metric: took 2.239452ms for default service account to be created ...
	I1010 18:20:09.598143  297519 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:20:09.600686  297519 system_pods.go:86] 8 kube-system pods found
	I1010 18:20:09.600712  297519 system_pods.go:89] "coredns-66bc5c9577-hrcxc" [98494133-86f7-4d52-9de0-1b648c4e1eac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:09.600719  297519 system_pods.go:89] "etcd-embed-certs-472518" [ef258b42-940e-4df8-bda7-2abda18693ec] Running
	I1010 18:20:09.600727  297519 system_pods.go:89] "kindnet-kpr69" [a2bc6e25-f261-43aa-b10b-35757900e93b] Running
	I1010 18:20:09.600733  297519 system_pods.go:89] "kube-apiserver-embed-certs-472518" [d3c6aec3-5dbe-4bda-a057-5ac1cacd6dc8] Running
	I1010 18:20:09.600742  297519 system_pods.go:89] "kube-controller-manager-embed-certs-472518" [35d677fb-3f5f-4b3e-8175-60234a80c67e] Running
	I1010 18:20:09.600747  297519 system_pods.go:89] "kube-proxy-bq985" [e2d6bf76-4b03-4118-b61b-605d27646095] Running
	I1010 18:20:09.600756  297519 system_pods.go:89] "kube-scheduler-embed-certs-472518" [7ebab2fe-6192-45eb-80a1-a169ea655e6c] Running
	I1010 18:20:09.600762  297519 system_pods.go:89] "storage-provisioner" [3237266d-6c19-4af5-aef2-8d99c561d535] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:20:09.600794  297519 retry.go:31] will retry after 227.824643ms: missing components: kube-dns
	I1010 18:20:09.832902  297519 system_pods.go:86] 8 kube-system pods found
	I1010 18:20:09.832939  297519 system_pods.go:89] "coredns-66bc5c9577-hrcxc" [98494133-86f7-4d52-9de0-1b648c4e1eac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:09.832947  297519 system_pods.go:89] "etcd-embed-certs-472518" [ef258b42-940e-4df8-bda7-2abda18693ec] Running
	I1010 18:20:09.832953  297519 system_pods.go:89] "kindnet-kpr69" [a2bc6e25-f261-43aa-b10b-35757900e93b] Running
	I1010 18:20:09.832959  297519 system_pods.go:89] "kube-apiserver-embed-certs-472518" [d3c6aec3-5dbe-4bda-a057-5ac1cacd6dc8] Running
	I1010 18:20:09.832967  297519 system_pods.go:89] "kube-controller-manager-embed-certs-472518" [35d677fb-3f5f-4b3e-8175-60234a80c67e] Running
	I1010 18:20:09.832972  297519 system_pods.go:89] "kube-proxy-bq985" [e2d6bf76-4b03-4118-b61b-605d27646095] Running
	I1010 18:20:09.832977  297519 system_pods.go:89] "kube-scheduler-embed-certs-472518" [7ebab2fe-6192-45eb-80a1-a169ea655e6c] Running
	I1010 18:20:09.832985  297519 system_pods.go:89] "storage-provisioner" [3237266d-6c19-4af5-aef2-8d99c561d535] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:20:09.833004  297519 retry.go:31] will retry after 338.538324ms: missing components: kube-dns
	I1010 18:20:10.176048  297519 system_pods.go:86] 8 kube-system pods found
	I1010 18:20:10.176120  297519 system_pods.go:89] "coredns-66bc5c9577-hrcxc" [98494133-86f7-4d52-9de0-1b648c4e1eac] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:10.176128  297519 system_pods.go:89] "etcd-embed-certs-472518" [ef258b42-940e-4df8-bda7-2abda18693ec] Running
	I1010 18:20:10.176136  297519 system_pods.go:89] "kindnet-kpr69" [a2bc6e25-f261-43aa-b10b-35757900e93b] Running
	I1010 18:20:10.176142  297519 system_pods.go:89] "kube-apiserver-embed-certs-472518" [d3c6aec3-5dbe-4bda-a057-5ac1cacd6dc8] Running
	I1010 18:20:10.176148  297519 system_pods.go:89] "kube-controller-manager-embed-certs-472518" [35d677fb-3f5f-4b3e-8175-60234a80c67e] Running
	I1010 18:20:10.176154  297519 system_pods.go:89] "kube-proxy-bq985" [e2d6bf76-4b03-4118-b61b-605d27646095] Running
	I1010 18:20:10.176162  297519 system_pods.go:89] "kube-scheduler-embed-certs-472518" [7ebab2fe-6192-45eb-80a1-a169ea655e6c] Running
	I1010 18:20:10.176171  297519 system_pods.go:89] "storage-provisioner" [3237266d-6c19-4af5-aef2-8d99c561d535] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:20:10.176187  297519 retry.go:31] will retry after 474.840475ms: missing components: kube-dns
	I1010 18:20:10.655761  297519 system_pods.go:86] 8 kube-system pods found
	I1010 18:20:10.655795  297519 system_pods.go:89] "coredns-66bc5c9577-hrcxc" [98494133-86f7-4d52-9de0-1b648c4e1eac] Running
	I1010 18:20:10.655805  297519 system_pods.go:89] "etcd-embed-certs-472518" [ef258b42-940e-4df8-bda7-2abda18693ec] Running
	I1010 18:20:10.655811  297519 system_pods.go:89] "kindnet-kpr69" [a2bc6e25-f261-43aa-b10b-35757900e93b] Running
	I1010 18:20:10.655817  297519 system_pods.go:89] "kube-apiserver-embed-certs-472518" [d3c6aec3-5dbe-4bda-a057-5ac1cacd6dc8] Running
	I1010 18:20:10.655823  297519 system_pods.go:89] "kube-controller-manager-embed-certs-472518" [35d677fb-3f5f-4b3e-8175-60234a80c67e] Running
	I1010 18:20:10.655827  297519 system_pods.go:89] "kube-proxy-bq985" [e2d6bf76-4b03-4118-b61b-605d27646095] Running
	I1010 18:20:10.655833  297519 system_pods.go:89] "kube-scheduler-embed-certs-472518" [7ebab2fe-6192-45eb-80a1-a169ea655e6c] Running
	I1010 18:20:10.655836  297519 system_pods.go:89] "storage-provisioner" [3237266d-6c19-4af5-aef2-8d99c561d535] Running
	I1010 18:20:10.655843  297519 system_pods.go:126] duration metric: took 1.057695494s to wait for k8s-apps to be running ...
	I1010 18:20:10.655857  297519 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:20:10.655908  297519 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:20:10.670330  297519 system_svc.go:56] duration metric: took 14.455489ms WaitForService to wait for kubelet
	I1010 18:20:10.670364  297519 kubeadm.go:586] duration metric: took 12.692752474s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:20:10.670383  297519 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:20:10.673297  297519 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:20:10.673319  297519 node_conditions.go:123] node cpu capacity is 8
	I1010 18:20:10.673333  297519 node_conditions.go:105] duration metric: took 2.945605ms to run NodePressure ...
	I1010 18:20:10.673344  297519 start.go:241] waiting for startup goroutines ...
	I1010 18:20:10.673354  297519 start.go:246] waiting for cluster config update ...
	I1010 18:20:10.673372  297519 start.go:255] writing updated cluster config ...
	I1010 18:20:10.673653  297519 ssh_runner.go:195] Run: rm -f paused
	I1010 18:20:10.677437  297519 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:20:10.680685  297519 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hrcxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:10.684708  297519 pod_ready.go:94] pod "coredns-66bc5c9577-hrcxc" is "Ready"
	I1010 18:20:10.684727  297519 pod_ready.go:86] duration metric: took 4.024389ms for pod "coredns-66bc5c9577-hrcxc" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:10.686530  297519 pod_ready.go:83] waiting for pod "etcd-embed-certs-472518" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:10.689732  297519 pod_ready.go:94] pod "etcd-embed-certs-472518" is "Ready"
	I1010 18:20:10.689748  297519 pod_ready.go:86] duration metric: took 3.203667ms for pod "etcd-embed-certs-472518" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:10.691498  297519 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-472518" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:10.694767  297519 pod_ready.go:94] pod "kube-apiserver-embed-certs-472518" is "Ready"
	I1010 18:20:10.694782  297519 pod_ready.go:86] duration metric: took 3.265674ms for pod "kube-apiserver-embed-certs-472518" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:10.696392  297519 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-472518" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:11.081160  297519 pod_ready.go:94] pod "kube-controller-manager-embed-certs-472518" is "Ready"
	I1010 18:20:11.081184  297519 pod_ready.go:86] duration metric: took 384.775631ms for pod "kube-controller-manager-embed-certs-472518" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:11.282310  297519 pod_ready.go:83] waiting for pod "kube-proxy-bq985" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:11.682086  297519 pod_ready.go:94] pod "kube-proxy-bq985" is "Ready"
	I1010 18:20:11.682110  297519 pod_ready.go:86] duration metric: took 399.776954ms for pod "kube-proxy-bq985" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:11.882429  297519 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-472518" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:12.281286  297519 pod_ready.go:94] pod "kube-scheduler-embed-certs-472518" is "Ready"
	I1010 18:20:12.281312  297519 pod_ready.go:86] duration metric: took 398.861055ms for pod "kube-scheduler-embed-certs-472518" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:12.281323  297519 pod_ready.go:40] duration metric: took 1.603864094s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:20:12.327348  297519 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:20:12.329266  297519 out.go:179] * Done! kubectl is now configured to use "embed-certs-472518" cluster and "default" namespace by default
	I1010 18:20:11.367962  290755 system_pods.go:86] 8 kube-system pods found
	I1010 18:20:11.367991  290755 system_pods.go:89] "coredns-66bc5c9577-wpsrd" [316be091-2de7-417c-b44b-1d26108e3ed3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:11.367998  290755 system_pods.go:89] "etcd-no-preload-556024" [0f8f77e3-e838-4f27-9f17-2cd264198574] Running
	I1010 18:20:11.368005  290755 system_pods.go:89] "kindnet-wsk6h" [71384861-5289-4d2b-8d62-b7d2c27d86b8] Running
	I1010 18:20:11.368009  290755 system_pods.go:89] "kube-apiserver-no-preload-556024" [7efe66ae-83bf-4ea5-a271-d8e944f74053] Running
	I1010 18:20:11.368013  290755 system_pods.go:89] "kube-controller-manager-no-preload-556024" [9e7fbd67-ce38-425d-b80d-b8ff3748fa70] Running
	I1010 18:20:11.368016  290755 system_pods.go:89] "kube-proxy-frchp" [3457ebf4-7608-4c78-b8dc-3a92a2fb32ae] Running
	I1010 18:20:11.368019  290755 system_pods.go:89] "kube-scheduler-no-preload-556024" [c6fb51f0-cf8d-4a56-aba5-95aff4190b44] Running
	I1010 18:20:11.368024  290755 system_pods.go:89] "storage-provisioner" [42a21c5e-4318-43f7-8d2a-dc62676b17c2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:20:11.368037  290755 retry.go:31] will retry after 411.343691ms: missing components: kube-dns
	I1010 18:20:11.783310  290755 system_pods.go:86] 8 kube-system pods found
	I1010 18:20:11.783337  290755 system_pods.go:89] "coredns-66bc5c9577-wpsrd" [316be091-2de7-417c-b44b-1d26108e3ed3] Running
	I1010 18:20:11.783342  290755 system_pods.go:89] "etcd-no-preload-556024" [0f8f77e3-e838-4f27-9f17-2cd264198574] Running
	I1010 18:20:11.783346  290755 system_pods.go:89] "kindnet-wsk6h" [71384861-5289-4d2b-8d62-b7d2c27d86b8] Running
	I1010 18:20:11.783352  290755 system_pods.go:89] "kube-apiserver-no-preload-556024" [7efe66ae-83bf-4ea5-a271-d8e944f74053] Running
	I1010 18:20:11.783359  290755 system_pods.go:89] "kube-controller-manager-no-preload-556024" [9e7fbd67-ce38-425d-b80d-b8ff3748fa70] Running
	I1010 18:20:11.783364  290755 system_pods.go:89] "kube-proxy-frchp" [3457ebf4-7608-4c78-b8dc-3a92a2fb32ae] Running
	I1010 18:20:11.783369  290755 system_pods.go:89] "kube-scheduler-no-preload-556024" [c6fb51f0-cf8d-4a56-aba5-95aff4190b44] Running
	I1010 18:20:11.783374  290755 system_pods.go:89] "storage-provisioner" [42a21c5e-4318-43f7-8d2a-dc62676b17c2] Running
	I1010 18:20:11.783386  290755 system_pods.go:126] duration metric: took 1.388625172s to wait for k8s-apps to be running ...
	I1010 18:20:11.783398  290755 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:20:11.783444  290755 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:20:11.798276  290755 system_svc.go:56] duration metric: took 14.867854ms WaitForService to wait for kubelet
	I1010 18:20:11.798304  290755 kubeadm.go:586] duration metric: took 15.718913125s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:20:11.798326  290755 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:20:11.801500  290755 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:20:11.801520  290755 node_conditions.go:123] node cpu capacity is 8
	I1010 18:20:11.801533  290755 node_conditions.go:105] duration metric: took 3.202831ms to run NodePressure ...
	I1010 18:20:11.801546  290755 start.go:241] waiting for startup goroutines ...
	I1010 18:20:11.801555  290755 start.go:246] waiting for cluster config update ...
	I1010 18:20:11.801568  290755 start.go:255] writing updated cluster config ...
	I1010 18:20:11.801846  290755 ssh_runner.go:195] Run: rm -f paused
	I1010 18:20:11.806594  290755 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:20:11.809867  290755 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wpsrd" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:11.814039  290755 pod_ready.go:94] pod "coredns-66bc5c9577-wpsrd" is "Ready"
	I1010 18:20:11.814068  290755 pod_ready.go:86] duration metric: took 4.180229ms for pod "coredns-66bc5c9577-wpsrd" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:11.816004  290755 pod_ready.go:83] waiting for pod "etcd-no-preload-556024" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:11.819563  290755 pod_ready.go:94] pod "etcd-no-preload-556024" is "Ready"
	I1010 18:20:11.819581  290755 pod_ready.go:86] duration metric: took 3.561734ms for pod "etcd-no-preload-556024" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:11.821271  290755 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-556024" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:11.824740  290755 pod_ready.go:94] pod "kube-apiserver-no-preload-556024" is "Ready"
	I1010 18:20:11.824756  290755 pod_ready.go:86] duration metric: took 3.470619ms for pod "kube-apiserver-no-preload-556024" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:11.826529  290755 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-556024" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:12.210721  290755 pod_ready.go:94] pod "kube-controller-manager-no-preload-556024" is "Ready"
	I1010 18:20:12.210748  290755 pod_ready.go:86] duration metric: took 384.203992ms for pod "kube-controller-manager-no-preload-556024" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:12.411060  290755 pod_ready.go:83] waiting for pod "kube-proxy-frchp" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:12.811295  290755 pod_ready.go:94] pod "kube-proxy-frchp" is "Ready"
	I1010 18:20:12.811328  290755 pod_ready.go:86] duration metric: took 400.2378ms for pod "kube-proxy-frchp" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:13.012467  290755 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-556024" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:13.411236  290755 pod_ready.go:94] pod "kube-scheduler-no-preload-556024" is "Ready"
	I1010 18:20:13.411260  290755 pod_ready.go:86] duration metric: took 398.765227ms for pod "kube-scheduler-no-preload-556024" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:20:13.411272  290755 pod_ready.go:40] duration metric: took 1.604649899s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:20:13.458430  290755 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:20:13.460871  290755 out.go:179] * Done! kubectl is now configured to use "no-preload-556024" cluster and "default" namespace by default
	E1010 18:20:13.463018  290755 logFile.go:53] failed to close the audit log: invalid argument
	W1010 18:20:13.463030  290755 root.go:91] failed to log command end to audit: failed to convert logs to rows: failed to unmarshal "{\"specversion\":\"1.0\",\"id\":\"04f4725": unexpected end of JSON input
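
For reference, the readiness checks the run above records can be replayed by hand once the cluster is up. A minimal sketch, assuming the kubeconfig context minikube wrote for this profile and the API endpoint shown in the log:

  kubectl config use-context no-preload-556024
  kubectl get nodes                                   # node should report Ready, as at 18:20:10
  curl -k https://192.168.76.2:8443/healthz           # the healthz probe above; prints "ok"
  kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m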
	
	
	==> CRI-O <==
	Oct 10 18:20:09 embed-certs-472518 crio[779]: time="2025-10-10T18:20:09.763093774Z" level=info msg="Starting container: acc7c63b51a4ea5a7c1293cdb840508c24c1ac70c84895ae8aba04da662798c1" id=c4e0b6c5-97f8-4bcd-b6f2-3ca28ab2dc12 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:20:09 embed-certs-472518 crio[779]: time="2025-10-10T18:20:09.765372144Z" level=info msg="Started container" PID=1849 containerID=acc7c63b51a4ea5a7c1293cdb840508c24c1ac70c84895ae8aba04da662798c1 description=kube-system/coredns-66bc5c9577-hrcxc/coredns id=c4e0b6c5-97f8-4bcd-b6f2-3ca28ab2dc12 name=/runtime.v1.RuntimeService/StartContainer sandboxID=30d0642d59ac0ed0116385f8d71a98eda1eb2e1fdedbf40aa18549d8d8f4c4b2
	Oct 10 18:20:12 embed-certs-472518 crio[779]: time="2025-10-10T18:20:12.766223938Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5bccdb49-2738-4339-b491-1c31f5b7aec7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:20:12 embed-certs-472518 crio[779]: time="2025-10-10T18:20:12.766309496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:20:12 embed-certs-472518 crio[779]: time="2025-10-10T18:20:12.772103964Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1fc626a896631b1e287691b32d0ca2d78a6b99fa3f6aebf0b85252cfbb0b7330 UID:f2253e59-f3f8-418a-a22e-e99da86065fd NetNS:/var/run/netns/eadc0afc-e6c0-4c22-bc4d-3d0048d940f8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00088c530}] Aliases:map[]}"
	Oct 10 18:20:12 embed-certs-472518 crio[779]: time="2025-10-10T18:20:12.772143204Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 10 18:20:12 embed-certs-472518 crio[779]: time="2025-10-10T18:20:12.782805262Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1fc626a896631b1e287691b32d0ca2d78a6b99fa3f6aebf0b85252cfbb0b7330 UID:f2253e59-f3f8-418a-a22e-e99da86065fd NetNS:/var/run/netns/eadc0afc-e6c0-4c22-bc4d-3d0048d940f8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00088c530}] Aliases:map[]}"
	Oct 10 18:20:12 embed-certs-472518 crio[779]: time="2025-10-10T18:20:12.782967954Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 10 18:20:12 embed-certs-472518 crio[779]: time="2025-10-10T18:20:12.783711517Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 10 18:20:12 embed-certs-472518 crio[779]: time="2025-10-10T18:20:12.784758005Z" level=info msg="Ran pod sandbox 1fc626a896631b1e287691b32d0ca2d78a6b99fa3f6aebf0b85252cfbb0b7330 with infra container: default/busybox/POD" id=5bccdb49-2738-4339-b491-1c31f5b7aec7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:20:12 embed-certs-472518 crio[779]: time="2025-10-10T18:20:12.786121088Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=44ff176e-372b-4862-a2cb-bef6158397d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:20:12 embed-certs-472518 crio[779]: time="2025-10-10T18:20:12.786226513Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=44ff176e-372b-4862-a2cb-bef6158397d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:20:12 embed-certs-472518 crio[779]: time="2025-10-10T18:20:12.786272545Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=44ff176e-372b-4862-a2cb-bef6158397d8 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:20:12 embed-certs-472518 crio[779]: time="2025-10-10T18:20:12.787084007Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8fa7b8f7-de79-4c55-a284-f1ab2a3fd012 name=/runtime.v1.ImageService/PullImage
	Oct 10 18:20:12 embed-certs-472518 crio[779]: time="2025-10-10T18:20:12.789263932Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 10 18:20:15 embed-certs-472518 crio[779]: time="2025-10-10T18:20:15.631293986Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=8fa7b8f7-de79-4c55-a284-f1ab2a3fd012 name=/runtime.v1.ImageService/PullImage
	Oct 10 18:20:15 embed-certs-472518 crio[779]: time="2025-10-10T18:20:15.63211443Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=75493d2e-eda0-4c32-b394-569605a2a3c4 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:20:15 embed-certs-472518 crio[779]: time="2025-10-10T18:20:15.633628679Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=78567957-d8cd-432d-970b-ee740971b9f3 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:20:15 embed-certs-472518 crio[779]: time="2025-10-10T18:20:15.6369349Z" level=info msg="Creating container: default/busybox/busybox" id=fdfcae0b-c0c0-41ed-a379-17fccb7d84ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:20:15 embed-certs-472518 crio[779]: time="2025-10-10T18:20:15.637737931Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:20:15 embed-certs-472518 crio[779]: time="2025-10-10T18:20:15.641325224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:20:15 embed-certs-472518 crio[779]: time="2025-10-10T18:20:15.642296586Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:20:15 embed-certs-472518 crio[779]: time="2025-10-10T18:20:15.669910683Z" level=info msg="Created container de1817c5070d2cebec7c7cc7106a631405a990f8f3baa69ef1dad25ffeb09856: default/busybox/busybox" id=fdfcae0b-c0c0-41ed-a379-17fccb7d84ef name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:20:15 embed-certs-472518 crio[779]: time="2025-10-10T18:20:15.670578764Z" level=info msg="Starting container: de1817c5070d2cebec7c7cc7106a631405a990f8f3baa69ef1dad25ffeb09856" id=ba3150a4-d190-4e95-80e1-f94f9f0f934c name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:20:15 embed-certs-472518 crio[779]: time="2025-10-10T18:20:15.672502995Z" level=info msg="Started container" PID=1930 containerID=de1817c5070d2cebec7c7cc7106a631405a990f8f3baa69ef1dad25ffeb09856 description=default/busybox/busybox id=ba3150a4-d190-4e95-80e1-f94f9f0f934c name=/runtime.v1.RuntimeService/StartContainer sandboxID=1fc626a896631b1e287691b32d0ca2d78a6b99fa3f6aebf0b85252cfbb0b7330
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	de1817c5070d2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   1fc626a896631       busybox                                      default
	acc7c63b51a4e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   30d0642d59ac0       coredns-66bc5c9577-hrcxc                     kube-system
	9dedd0827a679       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   588e8eccfe372       storage-provisioner                          kube-system
	e60a59f97e1a8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   5e64f9ac8256b       kindnet-kpr69                                kube-system
	ad718ce080a11       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   4cb417477464a       kube-proxy-bq985                             kube-system
	396b8c11ebe4b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   a2da9eac314aa       kube-apiserver-embed-certs-472518            kube-system
	5107d15cc000d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   46e1cc30b572a       kube-controller-manager-embed-certs-472518   kube-system
	1232d784edeca       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   910822ca25fda       etcd-embed-certs-472518                      kube-system
	2a0d380a03d61       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   d98e846d17026       kube-scheduler-embed-certs-472518            kube-system
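
The table above is the node's CRI view of the containers; assuming SSH access through the minikube profile named in these logs, it can be regenerated directly with crictl:

  minikube -p embed-certs-472518 ssh -- sudo crictl ps -a
  minikube -p embed-certs-472518 ssh -- sudo crictl images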
	
	
	==> coredns [acc7c63b51a4ea5a7c1293cdb840508c24c1ac70c84895ae8aba04da662798c1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59459 - 63050 "HINFO IN 5599368400819102367.4595735492191758124. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.031762459s
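
The HINFO query at the end of the CoreDNS log is its startup self-check. Cluster DNS can also be exercised directly against the kube-dns ClusterIP (10.96.0.10, the address allocated in the apiserver log below); a sketch, assuming a throwaway busybox pod:

  kubectl -n default run dnstest --rm -it --restart=Never --image=busybox:1.28 \
    -- nslookup kubernetes.default.svc.cluster.local 10.96.0.10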
	
	
	==> describe nodes <==
	Name:               embed-certs-472518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-472518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=embed-certs-472518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_19_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:19:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-472518
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:20:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:20:09 +0000   Fri, 10 Oct 2025 18:19:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:20:09 +0000   Fri, 10 Oct 2025 18:19:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:20:09 +0000   Fri, 10 Oct 2025 18:19:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 18:20:09 +0000   Fri, 10 Oct 2025 18:20:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-472518
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                48a864d3-5370-4000-a149-d46b202f0181
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-hrcxc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-472518                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-kpr69                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-472518             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-472518    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-bq985                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-472518             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node embed-certs-472518 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node embed-certs-472518 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node embed-certs-472518 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node embed-certs-472518 event: Registered Node embed-certs-472518 in Controller
	  Normal  NodeReady                14s   kubelet          Node embed-certs-472518 status is now: NodeReady
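
The node description above is kubectl output and can be refreshed at any point while the cluster is running, assuming the embed-certs-472518 context:

  kubectl describe node embed-certs-472518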
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 95 0c 3e 92 2e 08 06
	[  +0.052845] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[ +11.354316] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[  +7.101927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 9b 73 27 8c 80 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[  +6.287191] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 27 2d 28 d6 46 08 06
	[  +0.000293] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[Oct10 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 8c 22 f6 6b cf 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 29 bf 13 20 f9 08 06
	[ +15.511156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e d6 74 aa 27 d0 08 06
	[  +0.008495] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	[Oct10 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 0b 54 33 52 4e 08 06
	[  +0.000597] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	
	
	==> etcd [1232d784edeca61bc246377fb0690077f7dd1f536518940b81e1f8679ecd0f3d] <==
	{"level":"warn","ts":"2025-10-10T18:19:49.256355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.268892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.276884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.283802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.291316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.297600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.304109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.313599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.320036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.327011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.333814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.340077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.346866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.353143Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.358981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.365483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.372159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.378802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.385308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.392203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.398516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.413906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.421110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.427381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:49.476219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44464","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:20:23 up  1:02,  0 user,  load average: 4.71, 4.17, 2.66
	Linux embed-certs-472518 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e60a59f97e1a85a15ea83b2aae0c5538953295ae5c084bc6f1726dc468fb1da9] <==
	I1010 18:19:58.699591       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:19:58.700234       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1010 18:19:58.700408       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:19:58.700429       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:19:58.700453       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:19:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:19:58.997093       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:19:58.997126       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:19:58.997137       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:19:58.997275       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 18:19:59.397388       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:19:59.397489       1 metrics.go:72] Registering metrics
	I1010 18:19:59.397579       1 controller.go:711] "Syncing nftables rules"
	I1010 18:20:08.998133       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1010 18:20:08.998198       1 main.go:301] handling current node
	I1010 18:20:18.998173       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1010 18:20:18.998207       1 main.go:301] handling current node
	
	
	==> kube-apiserver [396b8c11ebe4b3b60777e71f99c06315614ad15e414a24f456e111d12bd448dd] <==
	I1010 18:19:49.970607       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1010 18:19:49.971612       1 controller.go:667] quota admission added evaluator for: namespaces
	I1010 18:19:49.976281       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1010 18:19:49.977100       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:19:49.982324       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1010 18:19:49.982411       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:19:49.991323       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:19:50.874633       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1010 18:19:50.878361       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1010 18:19:50.878378       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:19:51.363274       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:19:51.406762       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:19:51.478456       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1010 18:19:51.490780       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1010 18:19:51.492002       1 controller.go:667] quota admission added evaluator for: endpoints
	I1010 18:19:51.496484       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1010 18:19:51.906975       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1010 18:19:52.336012       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1010 18:19:52.346633       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1010 18:19:52.356895       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1010 18:19:57.666618       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1010 18:19:57.768034       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:19:57.774310       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:19:57.962668       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1010 18:20:21.554029       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:42710: use of closed network connection
	
	
	==> kube-controller-manager [5107d15cc000d002bf3388c29f8efe4242880ba3fe8d42524692373fbdf37815] <==
	I1010 18:19:56.906665       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1010 18:19:56.906716       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1010 18:19:56.906686       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1010 18:19:56.906828       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1010 18:19:56.906853       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1010 18:19:56.906915       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1010 18:19:56.907011       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1010 18:19:56.907199       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1010 18:19:56.907214       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1010 18:19:56.907203       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1010 18:19:56.907342       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1010 18:19:56.907360       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1010 18:19:56.907381       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1010 18:19:56.908485       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1010 18:19:56.909680       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1010 18:19:56.912469       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1010 18:19:56.913671       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:19:56.914997       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:19:56.916083       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1010 18:19:56.925139       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 18:19:56.929410       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1010 18:19:56.929496       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1010 18:19:56.929573       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-472518"
	I1010 18:19:56.929629       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1010 18:20:11.932395       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ad718ce080a112cbb5b14c064d28b495b8db6428dcc080e0c489189f9047b3ef] <==
	I1010 18:19:58.494670       1 server_linux.go:53] "Using iptables proxy"
	I1010 18:19:58.653405       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 18:19:58.754113       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 18:19:58.754236       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1010 18:19:58.754440       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:19:58.783127       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:19:58.783199       1 server_linux.go:132] "Using iptables Proxier"
	I1010 18:19:58.793360       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:19:58.793853       1 server.go:527] "Version info" version="v1.34.1"
	I1010 18:19:58.793878       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:19:58.796351       1 config.go:200] "Starting service config controller"
	I1010 18:19:58.796486       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 18:19:58.796447       1 config.go:106] "Starting endpoint slice config controller"
	I1010 18:19:58.796562       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 18:19:58.797766       1 config.go:309] "Starting node config controller"
	I1010 18:19:58.800130       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 18:19:58.800151       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 18:19:58.798104       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 18:19:58.800161       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 18:19:58.800167       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 18:19:58.897541       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1010 18:19:58.898108       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2a0d380a03d61a1716a5e4139007cab5f02791a2515e68bef5c4c2b186915d30] <==
	E1010 18:19:49.929386       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1010 18:19:49.929411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1010 18:19:49.929414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1010 18:19:49.929492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1010 18:19:49.929499       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1010 18:19:49.929677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1010 18:19:49.929742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1010 18:19:49.929776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1010 18:19:49.930443       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1010 18:19:49.930494       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1010 18:19:49.930638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1010 18:19:49.930812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1010 18:19:49.931302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1010 18:19:50.770986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1010 18:19:50.777121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1010 18:19:50.793252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1010 18:19:50.816749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1010 18:19:50.834184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1010 18:19:50.838515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1010 18:19:50.922732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1010 18:19:50.990460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1010 18:19:50.998649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1010 18:19:51.077137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1010 18:19:51.116402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1010 18:19:52.626525       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 10 18:19:53 embed-certs-472518 kubelet[1323]: I1010 18:19:53.272384    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-472518" podStartSLOduration=1.272357725 podStartE2EDuration="1.272357725s" podCreationTimestamp="2025-10-10 18:19:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:19:53.258370878 +0000 UTC m=+1.159414599" watchObservedRunningTime="2025-10-10 18:19:53.272357725 +0000 UTC m=+1.173401446"
	Oct 10 18:19:53 embed-certs-472518 kubelet[1323]: I1010 18:19:53.285643    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-472518" podStartSLOduration=1.285623264 podStartE2EDuration="1.285623264s" podCreationTimestamp="2025-10-10 18:19:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:19:53.285590581 +0000 UTC m=+1.186634307" watchObservedRunningTime="2025-10-10 18:19:53.285623264 +0000 UTC m=+1.186666984"
	Oct 10 18:19:53 embed-certs-472518 kubelet[1323]: I1010 18:19:53.285887    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-472518" podStartSLOduration=1.285871092 podStartE2EDuration="1.285871092s" podCreationTimestamp="2025-10-10 18:19:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:19:53.272308591 +0000 UTC m=+1.173352313" watchObservedRunningTime="2025-10-10 18:19:53.285871092 +0000 UTC m=+1.186914808"
	Oct 10 18:19:56 embed-certs-472518 kubelet[1323]: I1010 18:19:56.913623    1323 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 10 18:19:56 embed-certs-472518 kubelet[1323]: I1010 18:19:56.914446    1323 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 10 18:19:57 embed-certs-472518 kubelet[1323]: I1010 18:19:57.995753    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-472518" podStartSLOduration=5.995692604 podStartE2EDuration="5.995692604s" podCreationTimestamp="2025-10-10 18:19:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:19:53.296195817 +0000 UTC m=+1.197239538" watchObservedRunningTime="2025-10-10 18:19:57.995692604 +0000 UTC m=+5.896736316"
	Oct 10 18:19:58 embed-certs-472518 kubelet[1323]: I1010 18:19:58.012738    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2d6bf76-4b03-4118-b61b-605d27646095-xtables-lock\") pod \"kube-proxy-bq985\" (UID: \"e2d6bf76-4b03-4118-b61b-605d27646095\") " pod="kube-system/kube-proxy-bq985"
	Oct 10 18:19:58 embed-certs-472518 kubelet[1323]: I1010 18:19:58.012796    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a2bc6e25-f261-43aa-b10b-35757900e93b-cni-cfg\") pod \"kindnet-kpr69\" (UID: \"a2bc6e25-f261-43aa-b10b-35757900e93b\") " pod="kube-system/kindnet-kpr69"
	Oct 10 18:19:58 embed-certs-472518 kubelet[1323]: I1010 18:19:58.012830    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e2d6bf76-4b03-4118-b61b-605d27646095-kube-proxy\") pod \"kube-proxy-bq985\" (UID: \"e2d6bf76-4b03-4118-b61b-605d27646095\") " pod="kube-system/kube-proxy-bq985"
	Oct 10 18:19:58 embed-certs-472518 kubelet[1323]: I1010 18:19:58.012853    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2d6bf76-4b03-4118-b61b-605d27646095-lib-modules\") pod \"kube-proxy-bq985\" (UID: \"e2d6bf76-4b03-4118-b61b-605d27646095\") " pod="kube-system/kube-proxy-bq985"
	Oct 10 18:19:58 embed-certs-472518 kubelet[1323]: I1010 18:19:58.012885    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2bc6e25-f261-43aa-b10b-35757900e93b-xtables-lock\") pod \"kindnet-kpr69\" (UID: \"a2bc6e25-f261-43aa-b10b-35757900e93b\") " pod="kube-system/kindnet-kpr69"
	Oct 10 18:19:58 embed-certs-472518 kubelet[1323]: I1010 18:19:58.012906    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2bc6e25-f261-43aa-b10b-35757900e93b-lib-modules\") pod \"kindnet-kpr69\" (UID: \"a2bc6e25-f261-43aa-b10b-35757900e93b\") " pod="kube-system/kindnet-kpr69"
	Oct 10 18:19:58 embed-certs-472518 kubelet[1323]: I1010 18:19:58.012930    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c274\" (UniqueName: \"kubernetes.io/projected/e2d6bf76-4b03-4118-b61b-605d27646095-kube-api-access-4c274\") pod \"kube-proxy-bq985\" (UID: \"e2d6bf76-4b03-4118-b61b-605d27646095\") " pod="kube-system/kube-proxy-bq985"
	Oct 10 18:19:58 embed-certs-472518 kubelet[1323]: I1010 18:19:58.013014    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xlxq\" (UniqueName: \"kubernetes.io/projected/a2bc6e25-f261-43aa-b10b-35757900e93b-kube-api-access-4xlxq\") pod \"kindnet-kpr69\" (UID: \"a2bc6e25-f261-43aa-b10b-35757900e93b\") " pod="kube-system/kindnet-kpr69"
	Oct 10 18:19:59 embed-certs-472518 kubelet[1323]: I1010 18:19:59.236802    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bq985" podStartSLOduration=2.2367783980000002 podStartE2EDuration="2.236778398s" podCreationTimestamp="2025-10-10 18:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:19:59.236680025 +0000 UTC m=+7.137723751" watchObservedRunningTime="2025-10-10 18:19:59.236778398 +0000 UTC m=+7.137822119"
	Oct 10 18:19:59 embed-certs-472518 kubelet[1323]: I1010 18:19:59.802937    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kpr69" podStartSLOduration=2.802914083 podStartE2EDuration="2.802914083s" podCreationTimestamp="2025-10-10 18:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:19:59.260825243 +0000 UTC m=+7.161868964" watchObservedRunningTime="2025-10-10 18:19:59.802914083 +0000 UTC m=+7.703957801"
	Oct 10 18:20:09 embed-certs-472518 kubelet[1323]: I1010 18:20:09.383066    1323 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 10 18:20:09 embed-certs-472518 kubelet[1323]: I1010 18:20:09.499205    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49szt\" (UniqueName: \"kubernetes.io/projected/3237266d-6c19-4af5-aef2-8d99c561d535-kube-api-access-49szt\") pod \"storage-provisioner\" (UID: \"3237266d-6c19-4af5-aef2-8d99c561d535\") " pod="kube-system/storage-provisioner"
	Oct 10 18:20:09 embed-certs-472518 kubelet[1323]: I1010 18:20:09.499269    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98494133-86f7-4d52-9de0-1b648c4e1eac-config-volume\") pod \"coredns-66bc5c9577-hrcxc\" (UID: \"98494133-86f7-4d52-9de0-1b648c4e1eac\") " pod="kube-system/coredns-66bc5c9577-hrcxc"
	Oct 10 18:20:09 embed-certs-472518 kubelet[1323]: I1010 18:20:09.499326    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg5cj\" (UniqueName: \"kubernetes.io/projected/98494133-86f7-4d52-9de0-1b648c4e1eac-kube-api-access-lg5cj\") pod \"coredns-66bc5c9577-hrcxc\" (UID: \"98494133-86f7-4d52-9de0-1b648c4e1eac\") " pod="kube-system/coredns-66bc5c9577-hrcxc"
	Oct 10 18:20:09 embed-certs-472518 kubelet[1323]: I1010 18:20:09.499373    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3237266d-6c19-4af5-aef2-8d99c561d535-tmp\") pod \"storage-provisioner\" (UID: \"3237266d-6c19-4af5-aef2-8d99c561d535\") " pod="kube-system/storage-provisioner"
	Oct 10 18:20:10 embed-certs-472518 kubelet[1323]: I1010 18:20:10.260713    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hrcxc" podStartSLOduration=12.260688974 podStartE2EDuration="12.260688974s" podCreationTimestamp="2025-10-10 18:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:20:10.260680018 +0000 UTC m=+18.161723739" watchObservedRunningTime="2025-10-10 18:20:10.260688974 +0000 UTC m=+18.161732698"
	Oct 10 18:20:10 embed-certs-472518 kubelet[1323]: I1010 18:20:10.281267    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.281245016 podStartE2EDuration="12.281245016s" podCreationTimestamp="2025-10-10 18:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:20:10.270168424 +0000 UTC m=+18.171212144" watchObservedRunningTime="2025-10-10 18:20:10.281245016 +0000 UTC m=+18.182288737"
	Oct 10 18:20:12 embed-certs-472518 kubelet[1323]: I1010 18:20:12.517675    1323 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f78tk\" (UniqueName: \"kubernetes.io/projected/f2253e59-f3f8-418a-a22e-e99da86065fd-kube-api-access-f78tk\") pod \"busybox\" (UID: \"f2253e59-f3f8-418a-a22e-e99da86065fd\") " pod="default/busybox"
	Oct 10 18:20:16 embed-certs-472518 kubelet[1323]: I1010 18:20:16.282619    1323 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.436110857 podStartE2EDuration="4.282565719s" podCreationTimestamp="2025-10-10 18:20:12 +0000 UTC" firstStartedPulling="2025-10-10 18:20:12.7865798 +0000 UTC m=+20.687623501" lastFinishedPulling="2025-10-10 18:20:15.63303465 +0000 UTC m=+23.534078363" observedRunningTime="2025-10-10 18:20:16.281651741 +0000 UTC m=+24.182695478" watchObservedRunningTime="2025-10-10 18:20:16.282565719 +0000 UTC m=+24.183609440"
	
	
	==> storage-provisioner [9dedd0827a679d9979e25cced6f2e94aa1b110a86d099b64ce454d72b0065743] <==
	I1010 18:20:09.772766       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 18:20:09.781693       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 18:20:09.781754       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1010 18:20:09.783720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:09.789244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:20:09.789437       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 18:20:09.789491       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a789c1a6-8b74-43de-be1d-69d02ac1d0c8", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-472518_199ac190-f0ea-44c3-9798-5c2d1d6b5515 became leader
	I1010 18:20:09.789671       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-472518_199ac190-f0ea-44c3-9798-5c2d1d6b5515!
	W1010 18:20:09.791877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:09.795607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:20:09.890736       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-472518_199ac190-f0ea-44c3-9798-5c2d1d6b5515!
	W1010 18:20:11.799246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:11.803627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:13.807073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:13.811990       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:15.814813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:15.818663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:17.821728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:17.826358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:19.830355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:19.835461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:21.839194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:21.843849       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
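The repeated "v1 Endpoints is deprecated" warnings in the storage-provisioner log above appear to be benign: its leader election still records the lease in a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, visible in the LeaderElection event). A minimal way to inspect that lease out of band, using only standard kubectl and names taken from the log, would be:

    # sketch, not part of the test harness; context name from the post-mortem above
    kubectl --context embed-certs-472518 -n kube-system \
      get endpoints k8s.io-minikube-hostpath -o yaml
    # the current holder typically appears in the
    # control-plane.alpha.kubernetes.io/leader annotation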
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-472518 -n embed-certs-472518
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-472518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-556024 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-556024 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (253.458322ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:20:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-556024 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
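The exit status 11 above is minikube's pre-flight "check paused" step: before enabling an addon it lists runc containers on the node, and it is that probe, not the addon itself, that fails here. The same probe can be replayed by hand with the exact command quoted in the stderr (a sketch; profile name as above):

    # replay the failing paused-state check inside the node
    minikube -p no-preload-556024 ssh -- sudo runc list -f json
    # on this run it exits 1 with "open /run/runc: no such file or directory"
    minikube -p no-preload-556024 ssh -- ls -ld /run/runc   # confirm the state dir is absent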
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-556024 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-556024 describe deploy/metrics-server -n kube-system: exit status 1 (60.911255ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-556024 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
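As the expectation string shows, the expected image is the composition of the two flags passed to the enable command above: --registries=MetricsServer=fake.domain prepends the registry host to the --images=MetricsServer=registry.k8s.io/echoserver:1.4 value, yielding fake.domain/registry.k8s.io/echoserver:1.4. Had the metrics-server deployment been created, the rendered image could be read directly (a sketch with standard kubectl):

    kubectl --context no-preload-556024 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[*].image}'
    # expected to print fake.domain/registry.k8s.io/echoserver:1.4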
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-556024
helpers_test.go:243: (dbg) docker inspect no-preload-556024:

-- stdout --
	[
	    {
	        "Id": "6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d",
	        "Created": "2025-10-10T18:19:17.136910644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 291643,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:19:17.18418589Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d/hostname",
	        "HostsPath": "/var/lib/docker/containers/6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d/hosts",
	        "LogPath": "/var/lib/docker/containers/6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d/6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d-json.log",
	        "Name": "/no-preload-556024",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-556024:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-556024",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d",
	                "LowerDir": "/var/lib/docker/overlay2/0169bc262a39812abb813c6e4211d609913a24a3210914aa4b5ed073144773c7-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0169bc262a39812abb813c6e4211d609913a24a3210914aa4b5ed073144773c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0169bc262a39812abb813c6e4211d609913a24a3210914aa4b5ed073144773c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0169bc262a39812abb813c6e4211d609913a24a3210914aa4b5ed073144773c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-556024",
	                "Source": "/var/lib/docker/volumes/no-preload-556024/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-556024",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-556024",
	                "name.minikube.sigs.k8s.io": "no-preload-556024",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "31f6e649876f9061c4395eb1b4e9930d24be16870e0f622a95a89d30b1452506",
	            "SandboxKey": "/var/run/docker/netns/31f6e649876f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-556024": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:a2:2d:7e:3a:a7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "62177a68d9eb1c876ff604502e8d1e7d060441f560a7646d94ff4c9f62d14c4b",
	                    "EndpointID": "ab486b3dcabe55e08ced57560baeb08a0d82a904c2509adcb2a537bdfc0c9f4d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-556024",
	                        "6784c6613c75"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
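Most of the inspect payload above is boilerplate for this failure; the few fields the post-mortem actually consults can be pulled with Go-template formatting instead of the full JSON (a sketch against the same container, using only the standard docker CLI):

    docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' no-preload-556024
    docker inspect -f '{{(index .NetworkSettings.Networks "no-preload-556024").IPAddress}}' \
      no-preload-556024
    # per the JSON above these print "running pid=291643" and 192.168.76.2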
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-556024 -n no-preload-556024
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-556024 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-556024 logs -n 25: (1.056969006s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-078032 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                        │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                         │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo docker system info                                                                                                                                                                                                      │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-472518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-472518     │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo containerd config dump                                                                                                                                                                                                  │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo crio config                                                                                                                                                                                                             │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ delete  │ -p bridge-078032                                                                                                                                                                                                                              │ bridge-078032          │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-141193 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-141193 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p old-k8s-version-141193 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-141193 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ stop    │ -p embed-certs-472518 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-472518     │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-556024 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-556024      │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:20:23
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:20:23.895655  309154 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:20:23.895892  309154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:20:23.895900  309154 out.go:374] Setting ErrFile to fd 2...
	I1010 18:20:23.895905  309154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:20:23.896115  309154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:20:23.896565  309154 out.go:368] Setting JSON to false
	I1010 18:20:23.898641  309154 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3764,"bootTime":1760116660,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:20:23.898740  309154 start.go:141] virtualization: kvm guest
	I1010 18:20:23.901368  309154 out.go:179] * [old-k8s-version-141193] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:20:23.902824  309154 notify.go:220] Checking for updates...
	I1010 18:20:23.902862  309154 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:20:23.904333  309154 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:20:23.905586  309154 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:23.906678  309154 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:20:23.907802  309154 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:20:23.909214  309154 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:20:23.911036  309154 config.go:182] Loaded profile config "old-k8s-version-141193": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1010 18:20:23.912666  309154 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1010 18:20:23.913757  309154 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:20:23.944234  309154 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:20:23.944401  309154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:20:24.022040  309154 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:81 SystemTime:2025-10-10 18:20:24.003327833 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:20:24.022212  309154 docker.go:318] overlay module found
	I1010 18:20:24.024252  309154 out.go:179] * Using the docker driver based on existing profile
	I1010 18:20:24.025962  309154 start.go:305] selected driver: docker
	I1010 18:20:24.026020  309154 start.go:925] validating driver "docker" against &{Name:old-k8s-version-141193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-141193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:20:24.026194  309154 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:20:24.027256  309154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:20:24.103733  309154 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:81 SystemTime:2025-10-10 18:20:24.092895613 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:20:24.104118  309154 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:20:24.104152  309154 cni.go:84] Creating CNI manager for ""
	I1010 18:20:24.104219  309154 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:20:24.104278  309154 start.go:349] cluster config:
	{Name:old-k8s-version-141193 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-141193 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:20:24.106474  309154 out.go:179] * Starting "old-k8s-version-141193" primary control-plane node in "old-k8s-version-141193" cluster
	I1010 18:20:24.107720  309154 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:20:24.109139  309154 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:20:24.110296  309154 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1010 18:20:24.110338  309154 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1010 18:20:24.110368  309154 cache.go:58] Caching tarball of preloaded images
	I1010 18:20:24.110408  309154 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:20:24.110509  309154 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:20:24.110525  309154 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1010 18:20:24.110638  309154 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/old-k8s-version-141193/config.json ...
	I1010 18:20:24.133222  309154 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:20:24.133245  309154 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:20:24.133266  309154 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:20:24.133295  309154 start.go:360] acquireMachinesLock for old-k8s-version-141193: {Name:mk3087e22cd7dce7e6ebd7da3b62051c608e42a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:24.133364  309154 start.go:364] duration metric: took 45.019µs to acquireMachinesLock for "old-k8s-version-141193"
	I1010 18:20:24.133391  309154 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:20:24.133398  309154 fix.go:54] fixHost starting: 
	I1010 18:20:24.133654  309154 cli_runner.go:164] Run: docker container inspect old-k8s-version-141193 --format={{.State.Status}}
	I1010 18:20:24.155192  309154 fix.go:112] recreateIfNeeded on old-k8s-version-141193: state=Stopped err=<nil>
	W1010 18:20:24.155227  309154 fix.go:138] unexpected machine state, will restart: <nil>
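	
	The fix path above reuses the stopped machine: minikube inspects the profile container's state and, on seeing state=Stopped, restarts it rather than recreating it. The same state check can be run by hand with exactly the docker command the log records:
	
	    $ docker container inspect old-k8s-version-141193 --format={{.State.Status}}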
	
	
	==> CRI-O <==
	Oct 10 18:20:10 no-preload-556024 crio[839]: time="2025-10-10T18:20:10.572578396Z" level=info msg="Starting container: 964c05dda5eb2363483268b98612c82c61a6fd27f282c81f883e0ec661b121c6" id=66d48108-0727-4488-a507-8d31d8ed119c name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:20:10 no-preload-556024 crio[839]: time="2025-10-10T18:20:10.57457299Z" level=info msg="Started container" PID=2971 containerID=964c05dda5eb2363483268b98612c82c61a6fd27f282c81f883e0ec661b121c6 description=kube-system/coredns-66bc5c9577-wpsrd/coredns id=66d48108-0727-4488-a507-8d31d8ed119c name=/runtime.v1.RuntimeService/StartContainer sandboxID=1186d702839dba662982bb9a40d8845d006831e316122d53c2ecc8be2059ebab
	Oct 10 18:20:13 no-preload-556024 crio[839]: time="2025-10-10T18:20:13.963421153Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c2e7b481-24fe-4a10-bde2-775c60402a81 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:20:13 no-preload-556024 crio[839]: time="2025-10-10T18:20:13.963554719Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:20:13 no-preload-556024 crio[839]: time="2025-10-10T18:20:13.969992702Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:eee4fc8d16ca9ad7f74a8550b2f93f2b9a87a9f74b56f08c975d1fd4403e6904 UID:b9a243ca-7dc2-4e63-b0f7-7824f64e43f0 NetNS:/var/run/netns/1ad4b029-ebd0-4b7f-af5c-742e4000bc2b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b4a0}] Aliases:map[]}"
	Oct 10 18:20:13 no-preload-556024 crio[839]: time="2025-10-10T18:20:13.970031959Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 10 18:20:13 no-preload-556024 crio[839]: time="2025-10-10T18:20:13.981224688Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:eee4fc8d16ca9ad7f74a8550b2f93f2b9a87a9f74b56f08c975d1fd4403e6904 UID:b9a243ca-7dc2-4e63-b0f7-7824f64e43f0 NetNS:/var/run/netns/1ad4b029-ebd0-4b7f-af5c-742e4000bc2b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b4a0}] Aliases:map[]}"
	Oct 10 18:20:13 no-preload-556024 crio[839]: time="2025-10-10T18:20:13.981392956Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 10 18:20:13 no-preload-556024 crio[839]: time="2025-10-10T18:20:13.982142216Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 10 18:20:13 no-preload-556024 crio[839]: time="2025-10-10T18:20:13.982989317Z" level=info msg="Ran pod sandbox eee4fc8d16ca9ad7f74a8550b2f93f2b9a87a9f74b56f08c975d1fd4403e6904 with infra container: default/busybox/POD" id=c2e7b481-24fe-4a10-bde2-775c60402a81 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:20:13 no-preload-556024 crio[839]: time="2025-10-10T18:20:13.984363956Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=364db372-27a4-4d45-ba3e-5fd4773c41b4 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:20:13 no-preload-556024 crio[839]: time="2025-10-10T18:20:13.984516743Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=364db372-27a4-4d45-ba3e-5fd4773c41b4 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:20:13 no-preload-556024 crio[839]: time="2025-10-10T18:20:13.984552197Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=364db372-27a4-4d45-ba3e-5fd4773c41b4 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:20:13 no-preload-556024 crio[839]: time="2025-10-10T18:20:13.985105096Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0745482e-d6d4-4a83-891a-aface9faf3e9 name=/runtime.v1.ImageService/PullImage
	Oct 10 18:20:13 no-preload-556024 crio[839]: time="2025-10-10T18:20:13.986717917Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 10 18:20:17 no-preload-556024 crio[839]: time="2025-10-10T18:20:17.49512237Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0745482e-d6d4-4a83-891a-aface9faf3e9 name=/runtime.v1.ImageService/PullImage
	Oct 10 18:20:17 no-preload-556024 crio[839]: time="2025-10-10T18:20:17.49571039Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e8bc7f44-a246-4561-8fd3-b322fa908d48 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:20:17 no-preload-556024 crio[839]: time="2025-10-10T18:20:17.497040473Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f81aed63-89a0-4096-a38e-55764ff04b72 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:20:17 no-preload-556024 crio[839]: time="2025-10-10T18:20:17.500405337Z" level=info msg="Creating container: default/busybox/busybox" id=fa3894b3-6cd4-4d49-ade3-c19699d89e52 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:20:17 no-preload-556024 crio[839]: time="2025-10-10T18:20:17.501257787Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:20:17 no-preload-556024 crio[839]: time="2025-10-10T18:20:17.504807333Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:20:17 no-preload-556024 crio[839]: time="2025-10-10T18:20:17.50524081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:20:17 no-preload-556024 crio[839]: time="2025-10-10T18:20:17.535616207Z" level=info msg="Created container c1ba0214634a0d14b73294c2f888f436e16a1405430fa94e46dcf6a53079e618: default/busybox/busybox" id=fa3894b3-6cd4-4d49-ade3-c19699d89e52 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:20:17 no-preload-556024 crio[839]: time="2025-10-10T18:20:17.536346693Z" level=info msg="Starting container: c1ba0214634a0d14b73294c2f888f436e16a1405430fa94e46dcf6a53079e618" id=ad573f1f-de81-4964-99c2-ce26ef358b93 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:20:17 no-preload-556024 crio[839]: time="2025-10-10T18:20:17.538547826Z" level=info msg="Started container" PID=3045 containerID=c1ba0214634a0d14b73294c2f888f436e16a1405430fa94e46dcf6a53079e618 description=default/busybox/busybox id=ad573f1f-de81-4964-99c2-ce26ef358b93 name=/runtime.v1.RuntimeService/StartContainer sandboxID=eee4fc8d16ca9ad7f74a8550b2f93f2b9a87a9f74b56f08c975d1fd4403e6904
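	
	The CRI-O excerpt above is the runtime's tail from the node's systemd journal. A similar dump can be pulled by hand; a minimal sketch, assuming the profile is named after the node (no-preload-556024) and CRI-O runs as a systemd unit inside it:
	
	    $ minikube ssh -p no-preload-556024 -- sudo journalctl -u crio -n 25 --no-pager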
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c1ba0214634a0       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   eee4fc8d16ca9       busybox                                     default
	964c05dda5eb2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      15 seconds ago      Running             coredns                   0                   1186d702839db       coredns-66bc5c9577-wpsrd                    kube-system
	ce1432bf4d835       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      15 seconds ago      Running             storage-provisioner       0                   d42eb4a0c97dc       storage-provisioner                         kube-system
	442ca456ed4e1       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    26 seconds ago      Running             kindnet-cni               0                   c0308d89abba8       kindnet-wsk6h                               kube-system
	1b9bbd4c8a3ad       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      29 seconds ago      Running             kube-proxy                0                   d104595ba4878       kube-proxy-frchp                            kube-system
	888c3b2bcd0c5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      40 seconds ago      Running             kube-scheduler            0                   70e4d0391de90       kube-scheduler-no-preload-556024            kube-system
	7557cd4b4f71b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      40 seconds ago      Running             kube-controller-manager   0                   1d0cca595c153       kube-controller-manager-no-preload-556024   kube-system
	8edfcdd595d1e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      40 seconds ago      Running             kube-apiserver            0                   7d269b2a5777c       kube-apiserver-no-preload-556024            kube-system
	892ddb882b319       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      40 seconds ago      Running             etcd                      0                   68b5a1d20cece       etcd-no-preload-556024                      kube-system
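	
	The container table is CRI-level state for every container on the node, equivalent to what crictl prints; assuming crictl is on the node's PATH, as it normally is in the kicbase image:
	
	    $ minikube ssh -p no-preload-556024 -- sudo crictl ps -a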
	
	
	==> coredns [964c05dda5eb2363483268b98612c82c61a6fd27f282c81f883e0ec661b121c6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41420 - 18562 "HINFO IN 8058981065359749574.4395244542903031867. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03209259s
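	
	Per-container logs such as this CoreDNS block are addressed by the truncated container ID shown in the table above; crictl accepts ID prefixes, so the truncated form is enough:
	
	    $ minikube ssh -p no-preload-556024 -- sudo crictl logs 964c05dda5eb2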
	
	
	==> describe nodes <==
	Name:               no-preload-556024
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-556024
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=no-preload-556024
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_19_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:19:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-556024
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:20:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:20:20 +0000   Fri, 10 Oct 2025 18:19:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:20:20 +0000   Fri, 10 Oct 2025 18:19:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:20:20 +0000   Fri, 10 Oct 2025 18:19:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 18:20:20 +0000   Fri, 10 Oct 2025 18:20:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-556024
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                5de188e9-37d1-4335-8d19-aac53380f91c
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-wpsrd                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-no-preload-556024                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-wsk6h                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-556024             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-556024    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-frchp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-556024             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node no-preload-556024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node no-preload-556024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x8 over 41s)  kubelet          Node no-preload-556024 status is now: NodeHasSufficientPID
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s                kubelet          Node no-preload-556024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s                kubelet          Node no-preload-556024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s                kubelet          Node no-preload-556024 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s                node-controller  Node no-preload-556024 event: Registered Node no-preload-556024 in Controller
	  Normal  NodeReady                16s                kubelet          Node no-preload-556024 status is now: NodeReady
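	
	The node description is plain kubectl output; assuming the kubeconfig context minikube created for this profile carries the profile's name, it can be regenerated with:
	
	    $ kubectl --context no-preload-556024 describe node no-preload-556024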
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 95 0c 3e 92 2e 08 06
	[  +0.052845] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[ +11.354316] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[  +7.101927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 9b 73 27 8c 80 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[  +6.287191] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 27 2d 28 d6 46 08 06
	[  +0.000293] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[Oct10 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 8c 22 f6 6b cf 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 29 bf 13 20 f9 08 06
	[ +15.511156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e d6 74 aa 27 d0 08 06
	[  +0.008495] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	[Oct10 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 0b 54 33 52 4e 08 06
	[  +0.000597] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
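	
	The "martian source" lines are the kernel flagging packets whose source address looks implausible for the interface they arrived on; with pod traffic (10.244.0.0/16) crossing the Docker bridge this is expected noise rather than a failure. The ring buffer can be re-read on the node:
	
	    $ minikube ssh -p no-preload-556024 -- sudo dmesg --ctime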
	
	
	==> etcd [892ddb882b31945a6af3c3c730ff71e8dcefa2f10e298d2269a1d6e63fda2d0e] <==
	{"level":"warn","ts":"2025-10-10T18:19:46.970936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:46.977634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:46.984892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:46.991236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.000972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.014920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.021131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.028039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.035285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.041772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.049720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.070199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.076419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.083574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.090507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.098267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.105496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.111972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.119256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.125717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.133329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.146403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.153231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.163232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:19:47.212970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43240","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:20:26 up  1:02,  0 user,  load average: 4.71, 4.17, 2.66
	Linux no-preload-556024 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [442ca456ed4e1ebcd064572bd787a3cc2b3a2443a610cc8b8b5f70f46e623ef6] <==
	I1010 18:19:59.647919       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:19:59.648208       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1010 18:19:59.648351       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:19:59.648366       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:19:59.648385       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:19:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:19:59.851320       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:19:59.851381       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:19:59.851393       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:19:59.851576       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 18:20:00.252083       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:20:00.252107       1 metrics.go:72] Registering metrics
	I1010 18:20:00.252164       1 controller.go:711] "Syncing nftables rules"
	I1010 18:20:09.857337       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1010 18:20:09.857404       1 main.go:301] handling current node
	I1010 18:20:19.851152       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1010 18:20:19.851208       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8edfcdd595d1e2e5ec1d69abfb20a2b60ac9adffd7636f4ab977de92abdc3d80] <==
	I1010 18:19:47.714063       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1010 18:19:47.714104       1 cache.go:39] Caches are synced for autoregister controller
	I1010 18:19:47.715775       1 controller.go:667] quota admission added evaluator for: namespaces
	I1010 18:19:47.717519       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1010 18:19:47.721824       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1010 18:19:47.724490       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:19:47.733248       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:19:48.618570       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1010 18:19:48.625215       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1010 18:19:48.625234       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:19:49.108499       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:19:49.154782       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:19:49.224078       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1010 18:19:49.231443       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1010 18:19:49.232819       1 controller.go:667] quota admission added evaluator for: endpoints
	I1010 18:19:49.238018       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1010 18:19:49.668431       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1010 18:19:50.497306       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1010 18:19:50.505964       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1010 18:19:50.513791       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1010 18:19:54.671230       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1010 18:19:55.721803       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:19:55.726090       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:19:55.819509       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1010 18:20:24.764817       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:58670: use of closed network connection
	
	
	==> kube-controller-manager [7557cd4b4f71b10ac00eceb60b90878c0f98dc90de5b13430e791d4fa103bdb6] <==
	I1010 18:19:54.667269       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1010 18:19:54.667298       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1010 18:19:54.667442       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1010 18:19:54.667670       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1010 18:19:54.668462       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1010 18:19:54.668595       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1010 18:19:54.668618       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1010 18:19:54.668635       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1010 18:19:54.668603       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1010 18:19:54.668640       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1010 18:19:54.668807       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1010 18:19:54.670026       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1010 18:19:54.670047       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1010 18:19:54.670200       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1010 18:19:54.670205       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1010 18:19:54.670308       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-556024"
	I1010 18:19:54.670376       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1010 18:19:54.670967       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1010 18:19:54.671458       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1010 18:19:54.674019       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1010 18:19:54.675421       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:19:54.678920       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1010 18:19:54.684931       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1010 18:19:54.697844       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 18:20:14.674047       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
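	
	The controller manager entered master disruption mode at 18:19:54, while no node was Ready, and left it twenty seconds later, which lines up with the NodeReady event in the node description above. The current view is one command away:
	
	    $ kubectl --context no-preload-556024 get nodes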
	
	
	==> kube-proxy [1b9bbd4c8a3add68e6ba511847248628a24e5d6e4509f8104a612565c4393cfd] <==
	I1010 18:19:56.316949       1 server_linux.go:53] "Using iptables proxy"
	I1010 18:19:56.387617       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 18:19:56.488554       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 18:19:56.488598       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1010 18:19:56.488708       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:19:56.509928       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:19:56.510015       1 server_linux.go:132] "Using iptables Proxier"
	I1010 18:19:56.516258       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:19:56.516703       1 server.go:527] "Version info" version="v1.34.1"
	I1010 18:19:56.516746       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:19:56.518841       1 config.go:200] "Starting service config controller"
	I1010 18:19:56.518875       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 18:19:56.519543       1 config.go:309] "Starting node config controller"
	I1010 18:19:56.520206       1 config.go:106] "Starting endpoint slice config controller"
	I1010 18:19:56.520231       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 18:19:56.519598       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 18:19:56.520398       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 18:19:56.520350       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 18:19:56.520495       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 18:19:56.619160       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1010 18:19:56.620929       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1010 18:19:56.620938       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
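	
	kube-proxy settled on the iptables proxier; the "configuration may be incomplete" error is a warning about the unset nodePortAddresses field, not a startup failure, as the cache syncs that follow show. The rules it programs can be inspected directly on the node (KUBE-SERVICES is the standard top-level chain in iptables mode):
	
	    $ minikube ssh -p no-preload-556024 -- sudo iptables -t nat -L KUBE-SERVICES -n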
	
	
	==> kube-scheduler [888c3b2bcd0c5650f9c88057ceeddd0d13c45f459997d16dfd09dd5957a0f6cc] <==
	I1010 18:19:48.197432       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:19:48.199379       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:19:48.199424       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:19:48.199770       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1010 18:19:48.199789       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1010 18:19:48.201218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1010 18:19:48.201793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1010 18:19:48.202286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1010 18:19:48.204724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1010 18:19:48.204861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1010 18:19:48.206227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1010 18:19:48.206337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1010 18:19:48.206548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1010 18:19:48.206631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1010 18:19:48.206709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1010 18:19:48.206831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1010 18:19:48.207000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1010 18:19:48.207306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1010 18:19:48.207527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1010 18:19:48.207529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1010 18:19:48.207654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1010 18:19:48.207656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1010 18:19:48.207700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1010 18:19:48.207882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1010 18:19:49.499946       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
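	
	The burst of "Failed to watch ... is forbidden" errors at 18:19:48 is the usual startup race: the scheduler begins listing resources before its RBAC bindings are visible to the apiserver, and the final "Caches are synced" line shows it recovered a second later. If the errors persisted instead, its log would be the place to look (the component=kube-scheduler label is assumed from the standard static-pod labels):
	
	    $ kubectl --context no-preload-556024 -n kube-system logs -l component=kube-scheduler --tail=20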
	
	
	==> kubelet <==
	Oct 10 18:19:51 no-preload-556024 kubelet[2356]: E1010 18:19:51.368090    2356 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-no-preload-556024\" already exists" pod="kube-system/kube-apiserver-no-preload-556024"
	Oct 10 18:19:51 no-preload-556024 kubelet[2356]: E1010 18:19:51.368205    2356 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-no-preload-556024\" already exists" pod="kube-system/kube-scheduler-no-preload-556024"
	Oct 10 18:19:51 no-preload-556024 kubelet[2356]: E1010 18:19:51.368223    2356 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-no-preload-556024\" already exists" pod="kube-system/etcd-no-preload-556024"
	Oct 10 18:19:51 no-preload-556024 kubelet[2356]: I1010 18:19:51.378401    2356 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-556024" podStartSLOduration=1.378380306 podStartE2EDuration="1.378380306s" podCreationTimestamp="2025-10-10 18:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:19:51.365767335 +0000 UTC m=+1.128619069" watchObservedRunningTime="2025-10-10 18:19:51.378380306 +0000 UTC m=+1.141232040"
	Oct 10 18:19:54 no-preload-556024 kubelet[2356]: I1010 18:19:54.755275    2356 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 10 18:19:54 no-preload-556024 kubelet[2356]: I1010 18:19:54.755929    2356 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 10 18:19:55 no-preload-556024 kubelet[2356]: I1010 18:19:55.947296    2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3457ebf4-7608-4c78-b8dc-3a92a2fb32ae-xtables-lock\") pod \"kube-proxy-frchp\" (UID: \"3457ebf4-7608-4c78-b8dc-3a92a2fb32ae\") " pod="kube-system/kube-proxy-frchp"
	Oct 10 18:19:55 no-preload-556024 kubelet[2356]: I1010 18:19:55.947340    2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3457ebf4-7608-4c78-b8dc-3a92a2fb32ae-kube-proxy\") pod \"kube-proxy-frchp\" (UID: \"3457ebf4-7608-4c78-b8dc-3a92a2fb32ae\") " pod="kube-system/kube-proxy-frchp"
	Oct 10 18:19:55 no-preload-556024 kubelet[2356]: I1010 18:19:55.947358    2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71384861-5289-4d2b-8d62-b7d2c27d86b8-lib-modules\") pod \"kindnet-wsk6h\" (UID: \"71384861-5289-4d2b-8d62-b7d2c27d86b8\") " pod="kube-system/kindnet-wsk6h"
	Oct 10 18:19:55 no-preload-556024 kubelet[2356]: I1010 18:19:55.947374    2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq2bb\" (UniqueName: \"kubernetes.io/projected/3457ebf4-7608-4c78-b8dc-3a92a2fb32ae-kube-api-access-lq2bb\") pod \"kube-proxy-frchp\" (UID: \"3457ebf4-7608-4c78-b8dc-3a92a2fb32ae\") " pod="kube-system/kube-proxy-frchp"
	Oct 10 18:19:55 no-preload-556024 kubelet[2356]: I1010 18:19:55.947442    2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71384861-5289-4d2b-8d62-b7d2c27d86b8-xtables-lock\") pod \"kindnet-wsk6h\" (UID: \"71384861-5289-4d2b-8d62-b7d2c27d86b8\") " pod="kube-system/kindnet-wsk6h"
	Oct 10 18:19:55 no-preload-556024 kubelet[2356]: I1010 18:19:55.947495    2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn7js\" (UniqueName: \"kubernetes.io/projected/71384861-5289-4d2b-8d62-b7d2c27d86b8-kube-api-access-jn7js\") pod \"kindnet-wsk6h\" (UID: \"71384861-5289-4d2b-8d62-b7d2c27d86b8\") " pod="kube-system/kindnet-wsk6h"
	Oct 10 18:19:55 no-preload-556024 kubelet[2356]: I1010 18:19:55.947531    2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3457ebf4-7608-4c78-b8dc-3a92a2fb32ae-lib-modules\") pod \"kube-proxy-frchp\" (UID: \"3457ebf4-7608-4c78-b8dc-3a92a2fb32ae\") " pod="kube-system/kube-proxy-frchp"
	Oct 10 18:19:55 no-preload-556024 kubelet[2356]: I1010 18:19:55.947554    2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/71384861-5289-4d2b-8d62-b7d2c27d86b8-cni-cfg\") pod \"kindnet-wsk6h\" (UID: \"71384861-5289-4d2b-8d62-b7d2c27d86b8\") " pod="kube-system/kindnet-wsk6h"
	Oct 10 18:19:56 no-preload-556024 kubelet[2356]: I1010 18:19:56.399359    2356 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-frchp" podStartSLOduration=1.399323416 podStartE2EDuration="1.399323416s" podCreationTimestamp="2025-10-10 18:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:19:56.398325324 +0000 UTC m=+6.161177255" watchObservedRunningTime="2025-10-10 18:19:56.399323416 +0000 UTC m=+6.162175149"
	Oct 10 18:20:00 no-preload-556024 kubelet[2356]: I1010 18:20:00.396046    2356 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-wsk6h" podStartSLOduration=2.161140405 podStartE2EDuration="5.396022333s" podCreationTimestamp="2025-10-10 18:19:55 +0000 UTC" firstStartedPulling="2025-10-10 18:19:56.168230064 +0000 UTC m=+5.931081778" lastFinishedPulling="2025-10-10 18:19:59.403111991 +0000 UTC m=+9.165963706" observedRunningTime="2025-10-10 18:20:00.395897213 +0000 UTC m=+10.158748967" watchObservedRunningTime="2025-10-10 18:20:00.396022333 +0000 UTC m=+10.158874067"
	Oct 10 18:20:10 no-preload-556024 kubelet[2356]: I1010 18:20:10.198788    2356 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 10 18:20:10 no-preload-556024 kubelet[2356]: I1010 18:20:10.250855    2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z8xp\" (UniqueName: \"kubernetes.io/projected/42a21c5e-4318-43f7-8d2a-dc62676b17c2-kube-api-access-6z8xp\") pod \"storage-provisioner\" (UID: \"42a21c5e-4318-43f7-8d2a-dc62676b17c2\") " pod="kube-system/storage-provisioner"
	Oct 10 18:20:10 no-preload-556024 kubelet[2356]: I1010 18:20:10.250910    2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbvrn\" (UniqueName: \"kubernetes.io/projected/316be091-2de7-417c-b44b-1d26108e3ed3-kube-api-access-dbvrn\") pod \"coredns-66bc5c9577-wpsrd\" (UID: \"316be091-2de7-417c-b44b-1d26108e3ed3\") " pod="kube-system/coredns-66bc5c9577-wpsrd"
	Oct 10 18:20:10 no-preload-556024 kubelet[2356]: I1010 18:20:10.250982    2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/42a21c5e-4318-43f7-8d2a-dc62676b17c2-tmp\") pod \"storage-provisioner\" (UID: \"42a21c5e-4318-43f7-8d2a-dc62676b17c2\") " pod="kube-system/storage-provisioner"
	Oct 10 18:20:10 no-preload-556024 kubelet[2356]: I1010 18:20:10.251036    2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/316be091-2de7-417c-b44b-1d26108e3ed3-config-volume\") pod \"coredns-66bc5c9577-wpsrd\" (UID: \"316be091-2de7-417c-b44b-1d26108e3ed3\") " pod="kube-system/coredns-66bc5c9577-wpsrd"
	Oct 10 18:20:11 no-preload-556024 kubelet[2356]: I1010 18:20:11.421279    2356 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.421258501 podStartE2EDuration="15.421258501s" podCreationTimestamp="2025-10-10 18:19:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:20:11.421036429 +0000 UTC m=+21.183888162" watchObservedRunningTime="2025-10-10 18:20:11.421258501 +0000 UTC m=+21.184110251"
	Oct 10 18:20:11 no-preload-556024 kubelet[2356]: I1010 18:20:11.430992    2356 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wpsrd" podStartSLOduration=16.430971061 podStartE2EDuration="16.430971061s" podCreationTimestamp="2025-10-10 18:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:20:11.430726061 +0000 UTC m=+21.193577795" watchObservedRunningTime="2025-10-10 18:20:11.430971061 +0000 UTC m=+21.193822795"
	Oct 10 18:20:13 no-preload-556024 kubelet[2356]: I1010 18:20:13.772404    2356 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvf9p\" (UniqueName: \"kubernetes.io/projected/b9a243ca-7dc2-4e63-b0f7-7824f64e43f0-kube-api-access-kvf9p\") pod \"busybox\" (UID: \"b9a243ca-7dc2-4e63-b0f7-7824f64e43f0\") " pod="default/busybox"
	Oct 10 18:20:24 no-preload-556024 kubelet[2356]: E1010 18:20:24.764702    2356 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35080->127.0.0.1:34011: write tcp 127.0.0.1:35080->127.0.0.1:34011: write: broken pipe
	
	
	==> storage-provisioner [ce1432bf4d835a37a125e2273c0bd5585e212016b1aa471c867dfe3fb8d9f006] <==
	I1010 18:20:10.581401       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 18:20:10.589218       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 18:20:10.589271       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1010 18:20:10.591600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:10.596172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:20:10.596349       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 18:20:10.596394       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"239ef5d2-e469-4829-842f-94522e30a190", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-556024_f78ef508-d6ff-4e10-90c2-f190a5e59cce became leader
	I1010 18:20:10.596505       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-556024_f78ef508-d6ff-4e10-90c2-f190a5e59cce!
	W1010 18:20:10.598698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:10.601827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:20:10.696758       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-556024_f78ef508-d6ff-4e10-90c2-f190a5e59cce!
	W1010 18:20:12.605231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:12.609499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:14.613017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:14.617574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:16.620666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:16.626160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:18.629978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:18.635494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:20.639479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:20.643897       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:22.647873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:22.652683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:24.656600       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:20:24.661256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-556024 -n no-preload-556024
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-556024 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.18s)
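A note on the storage-provisioner log above: it is dominated by "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings because the provisioner's leader election (the kube-system/k8s.io-minikube-hostpath lock it acquires at 18:20:10 via leaderelection.go) still reads and writes a v1 Endpoints object on every renewal. Below is a minimal sketch of the Lease-based alternative from client-go, reusing the lock name and namespace from the log; it is illustrative only, not the provisioner's actual code.

// Hedged sketch: Lease-based leader election (coordination.k8s.io/v1),
// which avoids the v1 Endpoints deprecation warnings seen above.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname() // illustrative identity; the log uses name_uuid
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath", // lock name from the log
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership; shutting down")
			},
		},
	})
}

Against the v1.34.1 API server under test, Lease renewals produce no deprecation warnings, so the log would be limited to the actual election and controller events.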

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-821769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-821769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (281.36389ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-821769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-821769 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-821769 describe deploy/metrics-server -n kube-system: exit status 1 (60.298387ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-821769 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
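The exit-11 root cause is visible in the stderr above: before enabling an addon, minikube checks whether the cluster is paused, and on this crio node that check shells out to "sudo runc list -f json"; /run/runc does not exist, the listing exits 1, and the whole addon enable aborts with MK_ADDON_ENABLE_PAUSED before metrics-server is ever deployed (hence the NotFound below). The following is a minimal sketch of that kind of paused-state check, assuming runc's JSON list format (an array of objects with "id" and "status" fields, or "null" when empty); the tolerant handling of the missing state directory is an illustration, not minikube's actual behavior.

// Hedged sketch: list runc containers and report any in the "paused" state,
// treating a missing /run/runc state dir as "no containers".
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// container mirrors the fields we need from `runc list -f json`.
type container struct {
	ID     string `json:"id"`
	Status string `json:"status"` // created, running, pausing, paused, stopped
}

func pausedContainers() ([]string, error) {
	cmd := exec.Command("sudo", "runc", "list", "-f", "json")
	var out, errb bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &errb
	if err := cmd.Run(); err != nil {
		// The exact failure in the log above: the state dir is missing.
		if strings.Contains(errb.String(), "no such file or directory") {
			return nil, nil // no state dir, so nothing can be paused
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, errb.String())
	}
	if strings.TrimSpace(out.String()) == "" || strings.TrimSpace(out.String()) == "null" {
		return nil, nil
	}
	var cs []container
	if err := json.Unmarshal(out.Bytes(), &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers()
	if err != nil {
		fmt.Println("check paused:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}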
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-821769
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-821769:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166",
	        "Created": "2025-10-10T18:20:31.085915858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 312217,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:20:31.123163193Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166/hostname",
	        "HostsPath": "/var/lib/docker/containers/92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166/hosts",
	        "LogPath": "/var/lib/docker/containers/92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166/92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166-json.log",
	        "Name": "/default-k8s-diff-port-821769",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-821769:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-821769",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166",
	                "LowerDir": "/var/lib/docker/overlay2/66bee21f5730501e8e73927b89befd253dad8df05381d41144b0a046ca5a7701-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/66bee21f5730501e8e73927b89befd253dad8df05381d41144b0a046ca5a7701/merged",
	                "UpperDir": "/var/lib/docker/overlay2/66bee21f5730501e8e73927b89befd253dad8df05381d41144b0a046ca5a7701/diff",
	                "WorkDir": "/var/lib/docker/overlay2/66bee21f5730501e8e73927b89befd253dad8df05381d41144b0a046ca5a7701/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-821769",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-821769/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-821769",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-821769",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-821769",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e59901d0d3c36bb1e25e39b7387741f854f1912cf71822fac1707bdb17a89ee6",
	            "SandboxKey": "/var/run/docker/netns/e59901d0d3c3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-821769": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:b5:f1:cb:0e:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "24e5e8e22680fb22d88f869caaf5ecac6707c168b04786cc68232728a1674899",
	                    "EndpointID": "3c0a4b9370e98f8eb400d718d17458a8092fb46edf3f5c7ce536dfbfdfd2d432",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-821769",
	                        "92545ee0c998"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
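One practical detail in the inspect output above: each published container port (22, 2376, 5000, 8444, 32443) is bound to an ephemeral host port on 127.0.0.1, e.g. the API server's 8444/tcp maps to 127.0.0.1:33111. The harness extracts such bindings with a Go template (the "docker container inspect -f" call near the end of this log does exactly that for 22/tcp); decoding the JSON works just as well. A minimal sketch, assuming only the inspect output shown above:

// Hedged sketch: read the ephemeral host port Docker bound for 8444/tcp.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-821769").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []inspect // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	bindings := containers[0].NetworkSettings.Ports["8444/tcp"]
	if len(bindings) == 0 {
		log.Fatal("8444/tcp is not published")
	}
	// With the inspect output above this prints 127.0.0.1:33111.
	fmt.Printf("%s:%s\n", bindings[0].HostIP, bindings[0].HostPort)
}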
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-821769 -n default-k8s-diff-port-821769
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-821769 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-821769 logs -n 25: (1.003320666s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-078032 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-472518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo containerd config dump                                                                                                                                                                                                  │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo crio config                                                                                                                                                                                                             │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ delete  │ -p bridge-078032                                                                                                                                                                                                                              │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-141193 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p old-k8s-version-141193 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ stop    │ -p embed-certs-472518 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-556024 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ delete  │ -p disable-driver-mounts-523797                                                                                                                                                                                                               │ disable-driver-mounts-523797 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ stop    │ -p no-preload-556024 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-472518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p embed-certs-472518 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-556024 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p no-preload-556024 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-821769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:20:43
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:20:43.446366  316039 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:20:43.446643  316039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:20:43.446652  316039 out.go:374] Setting ErrFile to fd 2...
	I1010 18:20:43.446657  316039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:20:43.446905  316039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:20:43.447426  316039 out.go:368] Setting JSON to false
	I1010 18:20:43.448597  316039 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3783,"bootTime":1760116660,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:20:43.448694  316039 start.go:141] virtualization: kvm guest
	I1010 18:20:43.451659  316039 out.go:179] * [no-preload-556024] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:20:43.455280  316039 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:20:43.455310  316039 notify.go:220] Checking for updates...
	I1010 18:20:43.457194  316039 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:20:43.458229  316039 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:43.459338  316039 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:20:43.460374  316039 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:20:43.461326  316039 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:20:43.462916  316039 config.go:182] Loaded profile config "no-preload-556024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:43.463671  316039 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:20:43.494145  316039 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:20:43.494327  316039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:20:43.575548  316039 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-10 18:20:43.559967778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:20:43.575688  316039 docker.go:318] overlay module found
	I1010 18:20:43.578025  316039 out.go:179] * Using the docker driver based on existing profile
	I1010 18:20:43.579242  316039 start.go:305] selected driver: docker
	I1010 18:20:43.579261  316039 start.go:925] validating driver "docker" against &{Name:no-preload-556024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:20:43.579415  316039 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:20:43.580194  316039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:20:43.653363  316039 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-10 18:20:43.64191346 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:20:43.653670  316039 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:20:43.653698  316039 cni.go:84] Creating CNI manager for ""
	I1010 18:20:43.653755  316039 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:20:43.653825  316039 start.go:349] cluster config:
	{Name:no-preload-556024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:20:43.659998  316039 out.go:179] * Starting "no-preload-556024" primary control-plane node in "no-preload-556024" cluster
	I1010 18:20:43.661318  316039 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:20:43.662567  316039 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:20:43.663594  316039 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:20:43.663673  316039 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:20:43.663749  316039 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/config.json ...
	I1010 18:20:43.664143  316039 cache.go:107] acquiring lock: {Name:mkdface014b0b0c18e2529a8fc2cf742979f5f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664226  316039 cache.go:107] acquiring lock: {Name:mkd574c74807a65d6c1e08f0a6d292773ee4d51a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664257  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1010 18:20:43.664286  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1010 18:20:43.664290  316039 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 149.274µs
	I1010 18:20:43.664294  316039 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 71.383µs
	I1010 18:20:43.664309  316039 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1010 18:20:43.664309  316039 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1010 18:20:43.664330  316039 cache.go:107] acquiring lock: {Name:mk6c1abc09453f5583a50c7348563cf680f08172 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664353  316039 cache.go:107] acquiring lock: {Name:mk8a6cf34543e68ad996fdd3dfcc536ed23f13a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664378  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1010 18:20:43.664386  316039 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 58.29µs
	I1010 18:20:43.664398  316039 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1010 18:20:43.664401  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1010 18:20:43.664414  316039 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 62.339µs
	I1010 18:20:43.664412  316039 cache.go:107] acquiring lock: {Name:mk589006dd1715c9cef02bfeb051e2a5fdd82d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664423  316039 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1010 18:20:43.664435  316039 cache.go:107] acquiring lock: {Name:mk346c7b9277054f446ecd193d09cac2f17a13f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664474  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1010 18:20:43.664330  316039 cache.go:107] acquiring lock: {Name:mk43600d297347b2bd1ef8f04fef87e9e24d614a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664560  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1010 18:20:43.664579  316039 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 251.287µs
	I1010 18:20:43.664587  316039 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1010 18:20:43.664447  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1010 18:20:43.664606  316039 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 195.132µs
	I1010 18:20:43.664619  316039 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1010 18:20:43.664143  316039 cache.go:107] acquiring lock: {Name:mk4f454812d4444d82ff12e1c427c98a877e5e2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664653  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1010 18:20:43.664663  316039 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 550.696µs
	I1010 18:20:43.664673  316039 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1010 18:20:43.664483  316039 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 49.234µs
	I1010 18:20:43.664681  316039 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1010 18:20:43.664688  316039 cache.go:87] Successfully saved all images to host disk.
	I1010 18:20:43.689240  316039 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:20:43.689261  316039 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:20:43.689283  316039 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:20:43.689321  316039 start.go:360] acquireMachinesLock for no-preload-556024: {Name:mk3ff552b11677088d4385d2ba43c142109fcf3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.689401  316039 start.go:364] duration metric: took 59.53µs to acquireMachinesLock for "no-preload-556024"
	I1010 18:20:43.689425  316039 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:20:43.689435  316039 fix.go:54] fixHost starting: 
	I1010 18:20:43.689696  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:43.716175  316039 fix.go:112] recreateIfNeeded on no-preload-556024: state=Stopped err=<nil>
	W1010 18:20:43.716210  316039 fix.go:138] unexpected machine state, will restart: <nil>
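	
	Whether fix.go restarts the machine is decided by the state string from docker container inspect --format={{.State.Status}}. A rough local equivalent of that probe, shelling out the same way the cli_runner lines do (error handling trimmed):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerState returns docker's State.Status ("running", "exited", ...)
	// for the named container, matching the inspect call in the log.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	func main() {
		state, err := containerState("no-preload-556024")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		if state != "running" {
			// Mirrors "unexpected machine state, will restart" above.
			fmt.Println("state =", state, "- container must be restarted")
		}
	}
	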
	W1010 18:20:39.530761  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:41.532918  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:43.534100  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	I1010 18:20:41.340446  310776 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 18:20:41.340555  310776 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 18:20:42.841252  310776 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500922644s
	I1010 18:20:42.844237  310776 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1010 18:20:42.844348  310776 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1010 18:20:42.844433  310776 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1010 18:20:42.844518  310776 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1010 18:20:44.598226  310776 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.753880491s
	I1010 18:20:45.121438  310776 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.277113545s
	I1010 18:20:46.346293  310776 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.502033618s
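	
	The [kubelet-check] and [control-plane-check] phases are plain HTTP polls of healthz/livez endpoints under a deadline ("This can take up to 4m0s"). A bare-bones version of such a wait loop; the endpoint, poll interval, and TLS handling here are chosen for illustration, not copied from kubeadm:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitHealthy polls url until it answers 200 OK or the deadline passes,
	// the same shape as the kubelet and control-plane checks above.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The API-server endpoints are HTTPS with a cluster-local CA,
			// so a probe like this skips verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}
	
	func main() {
		if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	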
	I1010 18:20:46.357281  310776 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 18:20:46.366479  310776 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 18:20:46.375532  310776 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 18:20:46.375817  310776 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-821769 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 18:20:46.384299  310776 kubeadm.go:318] [bootstrap-token] Using token: gwvnud.yj4fhfjb9apke821
	I1010 18:20:42.077576  315243 out.go:252] * Restarting existing docker container for "embed-certs-472518" ...
	I1010 18:20:42.077652  315243 cli_runner.go:164] Run: docker start embed-certs-472518
	I1010 18:20:42.324899  315243 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:20:42.344432  315243 kic.go:430] container "embed-certs-472518" state is running.
	I1010 18:20:42.344870  315243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-472518
	I1010 18:20:42.364868  315243 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/config.json ...
	I1010 18:20:42.365194  315243 machine.go:93] provisionDockerMachine start ...
	I1010 18:20:42.365274  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:42.384498  315243 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:42.384729  315243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1010 18:20:42.384743  315243 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:20:42.385417  315243 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54750->127.0.0.1:33113: read: connection reset by peer
	I1010 18:20:45.520224  315243 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-472518
	
	I1010 18:20:45.520254  315243 ubuntu.go:182] provisioning hostname "embed-certs-472518"
	I1010 18:20:45.520313  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:45.539008  315243 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:45.539308  315243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1010 18:20:45.539325  315243 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-472518 && echo "embed-certs-472518" | sudo tee /etc/hostname
	I1010 18:20:45.697980  315243 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-472518
	
	I1010 18:20:45.698066  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:45.719981  315243 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:45.720234  315243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1010 18:20:45.720267  315243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-472518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-472518/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-472518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:20:45.864595  315243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:20:45.864632  315243 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:20:45.864669  315243 ubuntu.go:190] setting up certificates
	I1010 18:20:45.864681  315243 provision.go:84] configureAuth start
	I1010 18:20:45.864752  315243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-472518
	I1010 18:20:45.886254  315243 provision.go:143] copyHostCerts
	I1010 18:20:45.886322  315243 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:20:45.886336  315243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:20:45.886413  315243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:20:45.886551  315243 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:20:45.886565  315243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:20:45.886615  315243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:20:45.886698  315243 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:20:45.886709  315243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:20:45.886745  315243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:20:45.886812  315243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.embed-certs-472518 san=[127.0.0.1 192.168.94.2 embed-certs-472518 localhost minikube]
	I1010 18:20:46.271763  315243 provision.go:177] copyRemoteCerts
	I1010 18:20:46.271823  315243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:20:46.271855  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:46.291521  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:46.392626  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:20:46.415271  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 18:20:46.434707  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:20:46.454219  315243 provision.go:87] duration metric: took 589.52001ms to configureAuth
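	
	configureAuth regenerates a server certificate whose SANs cover every name the machine answers to, per the san=[...] list above. A compact illustration of issuing such a certificate with Go's crypto/x509; it self-signs for brevity, whereas minikube signs with its ca.pem/ca-key.pem pair:
	
	package main
	
	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-472518"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
			// SANs from the log line, split into their typed fields.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
			DNSNames:    []string{"embed-certs-472518", "localhost", "minikube"},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed (template doubles as parent); the real flow signs with the CA.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	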
	I1010 18:20:46.454244  315243 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:20:46.454427  315243 config.go:182] Loaded profile config "embed-certs-472518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:46.454546  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:46.473500  315243 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:46.473704  315243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1010 18:20:46.473721  315243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:20:46.789031  315243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:20:46.789116  315243 machine.go:96] duration metric: took 4.423902548s to provisionDockerMachine
	I1010 18:20:46.789130  315243 start.go:293] postStartSetup for "embed-certs-472518" (driver="docker")
	I1010 18:20:46.789143  315243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:20:46.789210  315243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:20:46.789258  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:46.815152  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:46.385437  310776 out.go:252]   - Configuring RBAC rules ...
	I1010 18:20:46.385588  310776 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 18:20:46.389824  310776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 18:20:46.394691  310776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 18:20:46.397355  310776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 18:20:46.399852  310776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 18:20:46.402418  310776 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 18:20:46.752330  310776 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 18:20:47.169598  310776 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1010 18:20:47.752782  310776 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1010 18:20:47.754001  310776 kubeadm.go:318] 
	I1010 18:20:47.754109  310776 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1010 18:20:47.754123  310776 kubeadm.go:318] 
	I1010 18:20:47.754232  310776 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1010 18:20:47.754244  310776 kubeadm.go:318] 
	I1010 18:20:47.754289  310776 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1010 18:20:47.754398  310776 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 18:20:47.754483  310776 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 18:20:47.754492  310776 kubeadm.go:318] 
	I1010 18:20:47.754572  310776 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1010 18:20:47.754589  310776 kubeadm.go:318] 
	I1010 18:20:47.754658  310776 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 18:20:47.754668  310776 kubeadm.go:318] 
	I1010 18:20:47.754745  310776 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1010 18:20:47.754863  310776 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 18:20:47.754965  310776 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 18:20:47.755000  310776 kubeadm.go:318] 
	I1010 18:20:47.755138  310776 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 18:20:47.755249  310776 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1010 18:20:47.755261  310776 kubeadm.go:318] 
	I1010 18:20:47.755379  310776 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token gwvnud.yj4fhfjb9apke821 \
	I1010 18:20:47.755581  310776 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f \
	I1010 18:20:47.755622  310776 kubeadm.go:318] 	--control-plane 
	I1010 18:20:47.755633  310776 kubeadm.go:318] 
	I1010 18:20:47.755764  310776 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1010 18:20:47.755779  310776 kubeadm.go:318] 
	I1010 18:20:47.755902  310776 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token gwvnud.yj4fhfjb9apke821 \
	I1010 18:20:47.756083  310776 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f 
	I1010 18:20:47.759459  310776 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1010 18:20:47.759612  310776 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 18:20:47.759649  310776 cni.go:84] Creating CNI manager for ""
	I1010 18:20:47.759660  310776 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:20:47.761460  310776 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1010 18:20:46.914251  315243 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:20:46.918720  315243 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:20:46.918754  315243 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:20:46.918767  315243 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:20:46.918823  315243 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:20:46.918934  315243 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:20:46.919076  315243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:20:46.928469  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:20:46.951615  315243 start.go:296] duration metric: took 162.458821ms for postStartSetup
	I1010 18:20:46.951700  315243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:20:46.951744  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:46.972432  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:47.076364  315243 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:20:47.081264  315243 fix.go:56] duration metric: took 5.026311661s for fixHost
	I1010 18:20:47.081299  315243 start.go:83] releasing machines lock for "embed-certs-472518", held for 5.026378467s
	I1010 18:20:47.081380  315243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-472518
	I1010 18:20:47.100059  315243 ssh_runner.go:195] Run: cat /version.json
	I1010 18:20:47.100111  315243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:20:47.100122  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:47.100174  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:47.122805  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:47.124141  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:47.227891  315243 ssh_runner.go:195] Run: systemctl --version
	I1010 18:20:47.299889  315243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:20:47.336545  315243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:20:47.341187  315243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:20:47.341242  315243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:20:47.350350  315243 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1010 18:20:47.350370  315243 start.go:495] detecting cgroup driver to use...
	I1010 18:20:47.350396  315243 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:20:47.350445  315243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:20:47.365413  315243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:20:47.379380  315243 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:20:47.379437  315243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:20:47.395098  315243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:20:47.409632  315243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:20:47.495438  315243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:20:47.584238  315243 docker.go:234] disabling docker service ...
	I1010 18:20:47.584305  315243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:20:47.600224  315243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:20:47.614516  315243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:20:47.704010  315243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:20:47.792697  315243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
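	
	Before handing the node to cri-o, the runner stops, disables, and masks the competing docker and cri-docker units so they cannot reclaim the container runtime. The same stop/disable/mask sequence, sketched with os/exec and kept best-effort the way the log tolerates individual failures:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// disableUnit mirrors the sequence in the log for one socket/service pair.
	func disableUnit(socket, service string) {
		for _, args := range [][]string{
			{"systemctl", "stop", "-f", socket},
			{"systemctl", "stop", "-f", service},
			{"systemctl", "disable", socket},
			{"systemctl", "mask", service},
		} {
			if err := exec.Command("sudo", args...).Run(); err != nil {
				fmt.Println(args, "failed:", err) // best-effort, as in the log
			}
		}
	}
	
	func main() {
		disableUnit("cri-docker.socket", "cri-docker.service")
		disableUnit("docker.socket", "docker.service")
	}
	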
	I1010 18:20:47.808011  315243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:20:47.826927  315243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:20:47.826983  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.837633  315243 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:20:47.837698  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.848119  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.859624  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.870939  315243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:20:47.882141  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.894494  315243 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.906184  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.916671  315243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:20:47.924923  315243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:20:47.934175  315243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:48.032532  315243 ssh_runner.go:195] Run: sudo systemctl restart crio
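	
	Taken together, the sed invocations above are an in-place rewrite of /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch cgroup_manager to systemd, and re-add conmon_cgroup beneath it. A hypothetical local equivalent of those substitutions using Go regexps instead of remote sed:
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		// A made-up starting config; the real file lives on the node.
		conf := `[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	`
		// Same substitutions the sed commands in the log perform.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")
		fmt.Print(conf)
	}
	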
	I1010 18:20:48.213272  315243 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:20:48.213343  315243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:20:48.217822  315243 start.go:563] Will wait 60s for crictl version
	I1010 18:20:48.217887  315243 ssh_runner.go:195] Run: which crictl
	I1010 18:20:48.221636  315243 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:20:48.247933  315243 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:20:48.248044  315243 ssh_runner.go:195] Run: crio --version
	I1010 18:20:48.280438  315243 ssh_runner.go:195] Run: crio --version
	I1010 18:20:48.313602  315243 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:20:43.718100  316039 out.go:252] * Restarting existing docker container for "no-preload-556024" ...
	I1010 18:20:43.718195  316039 cli_runner.go:164] Run: docker start no-preload-556024
	I1010 18:20:44.003543  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:44.025442  316039 kic.go:430] container "no-preload-556024" state is running.
	I1010 18:20:44.025897  316039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-556024
	I1010 18:20:44.048338  316039 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/config.json ...
	I1010 18:20:44.048652  316039 machine.go:93] provisionDockerMachine start ...
	I1010 18:20:44.048722  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:44.071078  316039 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:44.071356  316039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1010 18:20:44.071373  316039 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:20:44.071958  316039 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50844->127.0.0.1:33118: read: connection reset by peer
	I1010 18:20:47.219988  316039 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-556024
	
	I1010 18:20:47.220017  316039 ubuntu.go:182] provisioning hostname "no-preload-556024"
	I1010 18:20:47.220124  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:47.240083  316039 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:47.240315  316039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1010 18:20:47.240331  316039 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-556024 && echo "no-preload-556024" | sudo tee /etc/hostname
	I1010 18:20:47.384842  316039 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-556024
	
	I1010 18:20:47.384916  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:47.403676  316039 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:47.403883  316039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1010 18:20:47.403900  316039 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-556024' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-556024/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-556024' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:20:47.541827  316039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:20:47.541854  316039 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:20:47.541874  316039 ubuntu.go:190] setting up certificates
	I1010 18:20:47.541882  316039 provision.go:84] configureAuth start
	I1010 18:20:47.541927  316039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-556024
	I1010 18:20:47.561676  316039 provision.go:143] copyHostCerts
	I1010 18:20:47.561736  316039 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:20:47.561750  316039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:20:47.561822  316039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:20:47.561945  316039 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:20:47.561957  316039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:20:47.561985  316039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:20:47.562088  316039 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:20:47.562100  316039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:20:47.562130  316039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:20:47.562203  316039 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.no-preload-556024 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-556024]
	I1010 18:20:47.678388  316039 provision.go:177] copyRemoteCerts
	I1010 18:20:47.678453  316039 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:20:47.678494  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:47.696871  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:47.801868  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:20:47.826483  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 18:20:47.849207  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 18:20:47.872413  316039 provision.go:87] duration metric: took 330.51941ms to configureAuth
	I1010 18:20:47.872443  316039 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:20:47.872620  316039 config.go:182] Loaded profile config "no-preload-556024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:47.872755  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:47.895966  316039 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:47.896218  316039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1010 18:20:47.896242  316039 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:20:48.278422  316039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:20:48.278452  316039 machine.go:96] duration metric: took 4.229784935s to provisionDockerMachine
	I1010 18:20:48.278468  316039 start.go:293] postStartSetup for "no-preload-556024" (driver="docker")
	I1010 18:20:48.278483  316039 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:20:48.278552  316039 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:20:48.278614  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:48.299387  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:48.409396  316039 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:20:48.413415  316039 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:20:48.413447  316039 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:20:48.413459  316039 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:20:48.413503  316039 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:20:48.413586  316039 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:20:48.413677  316039 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:20:48.423085  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:20:48.445117  316039 start.go:296] duration metric: took 166.633308ms for postStartSetup
	I1010 18:20:48.445191  316039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:20:48.445225  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	W1010 18:20:46.032712  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:48.033716  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	I1010 18:20:48.317208  315243 cli_runner.go:164] Run: docker network inspect embed-certs-472518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:20:48.336738  315243 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1010 18:20:48.344444  315243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
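	
	The grep -v / echo pipeline is an idempotent upsert of /etc/hosts: drop any stale line for the name, append a fresh IP<tab>name entry, and copy the temp file back with sudo. The same upsert expressed as a small Go function (names here are illustrative):
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// upsertHost drops any existing line ending in "<tab>host" and appends a
	// fresh "IP<tab>host" entry - the same effect as the pipeline above.
	func upsertHost(hostsFile, ip, host string) string {
		var kept []string
		for _, line := range strings.Split(hostsFile, "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, host)
	}
	
	func main() {
		fmt.Print(upsertHost("127.0.0.1\tlocalhost", "192.168.94.1", "host.minikube.internal"))
	}
	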
	I1010 18:20:48.359751  315243 kubeadm.go:883] updating cluster {Name:embed-certs-472518 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:20:48.359866  315243 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:20:48.359903  315243 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:20:48.394787  315243 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:20:48.394808  315243 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:20:48.394850  315243 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:20:48.422591  315243 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:20:48.422611  315243 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:20:48.422618  315243 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1010 18:20:48.422707  315243 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-472518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:20:48.422772  315243 ssh_runner.go:195] Run: crio config
	I1010 18:20:48.471617  315243 cni.go:84] Creating CNI manager for ""
	I1010 18:20:48.471643  315243 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:20:48.471662  315243 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 18:20:48.471692  315243 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-472518 NodeName:embed-certs-472518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:20:48.471834  315243 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-472518"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:20:48.471900  315243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:20:48.482685  315243 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:20:48.482762  315243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:20:48.492297  315243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1010 18:20:48.507309  315243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:20:48.521884  315243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
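	
	The kubeadm.yaml just copied is a multi-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, printed in full above). In its barest form the hand-off to kubeadm looks like the sketch below; minikube actually drives kubeadm over SSH with additional phase and preflight flags:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Feed the generated config to kubeadm; --config is kubeadm's own flag.
		cmd := exec.Command("kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml.new")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("kubeadm init failed:", err)
		}
	}
	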
	I1010 18:20:48.537302  315243 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:20:48.541606  315243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:20:48.552248  315243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:48.648834  315243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:20:48.671702  315243 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518 for IP: 192.168.94.2
	I1010 18:20:48.671724  315243 certs.go:195] generating shared ca certs ...
	I1010 18:20:48.671744  315243 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:48.671901  315243 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:20:48.671949  315243 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:20:48.671960  315243 certs.go:257] generating profile certs ...
	I1010 18:20:48.672048  315243 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/client.key
	I1010 18:20:48.672135  315243 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key.37abe28c
	I1010 18:20:48.672172  315243 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.key
	I1010 18:20:48.672285  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:20:48.672313  315243 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:20:48.672320  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:20:48.672346  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:20:48.672365  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:20:48.672386  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:20:48.672421  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:20:48.673064  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:20:48.697896  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:20:48.721920  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:20:48.746177  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:20:48.773805  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1010 18:20:48.797763  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 18:20:48.821956  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:20:48.845335  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 18:20:48.866318  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:20:48.890302  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:20:48.910153  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:20:48.932176  315243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:20:48.953102  315243 ssh_runner.go:195] Run: openssl version
	I1010 18:20:48.961833  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:20:48.974420  315243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:20:48.979097  315243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:20:48.979165  315243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:20:49.017904  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:20:49.028691  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:20:49.045017  315243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:20:49.049108  315243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:20:49.049166  315243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:20:49.085808  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:20:49.095911  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:20:49.105985  315243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:49.110274  315243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:49.110329  315243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:49.150752  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
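	
	Each openssl x509 -hash -noout / ln -fs pair builds OpenSSL's lookup-by-directory layout: a symlink named <subject-hash>.0 in /etc/ssl/certs pointing at the certificate (b5213941.0 above is minikubeCA's hash). Reproducing one round of that from Go; linkBySubjectHash is an illustrative helper, and the paths are the ones in the log:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkBySubjectHash asks openssl for the certificate's subject hash and
	// symlinks <hash>.0 to the cert so directory-based CA lookup can find it.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		os.Remove(link) // "-fs" semantics: replace any stale link
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}
	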
	I1010 18:20:49.164858  315243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:20:49.169330  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:20:49.221633  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:20:49.280769  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:20:49.360389  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:20:49.408955  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:20:49.448148  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
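	
	The -checkend 86400 runs ask whether each control-plane certificate expires within the next 24 hours; a non-zero exit would trigger regeneration. The equivalent check in Go, parsing the PEM directly instead of shelling out (path and window match the log):
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// expiresWithin reports whether the PEM cert at path expires inside d -
	// the same question "openssl x509 -checkend 86400" answers above.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
	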
	I1010 18:20:49.488852  315243 kubeadm.go:400] StartCluster: {Name:embed-certs-472518 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:20:49.488956  315243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:20:49.489020  315243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:20:49.528775  315243 cri.go:89] found id: "159136e63b21ef09e85b6efdc6b5a0f5be67f5af9a3516c5f8cae7be0af60846"
	I1010 18:20:49.528796  315243 cri.go:89] found id: "3622c66fa378c4b8614e23f6545ac6151fa6ef096364723cbdd5d22677bc0ca9"
	I1010 18:20:49.528802  315243 cri.go:89] found id: "a5c1be1847d40640048f86d96a7f93b4166d1688a8afd40971231c2b59f73202"
	I1010 18:20:49.528807  315243 cri.go:89] found id: "a52804abc0e7184b8ec037e1a9594b3794f50868b2f90978e95ba4f3dac34818"
	I1010 18:20:49.528811  315243 cri.go:89] found id: ""
	I1010 18:20:49.528852  315243 ssh_runner.go:195] Run: sudo runc list -f json
	W1010 18:20:49.546231  315243 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:20:49Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:20:49.546375  315243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:20:49.558092  315243 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1010 18:20:49.558114  315243 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1010 18:20:49.558164  315243 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 18:20:49.575197  315243 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:20:49.575886  315243 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-472518" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:49.576504  315243 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-5815/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-472518" cluster setting kubeconfig missing "embed-certs-472518" context setting]
	I1010 18:20:49.577193  315243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:49.578945  315243 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 18:20:49.590650  315243 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1010 18:20:49.590685  315243 kubeadm.go:601] duration metric: took 32.565143ms to restartPrimaryControlPlane
	I1010 18:20:49.590695  315243 kubeadm.go:402] duration metric: took 101.853492ms to StartCluster
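
For reference, the kubeconfig repair logged above can be reproduced by hand with kubectl's config subcommands; cluster name, endpoint, and CA path are taken from this log, while the user name is an assumption for illustration:

	kubectl --kubeconfig=/home/jenkins/minikube-integration/21724-5815/kubeconfig config set-cluster embed-certs-472518 \
	  --server=https://192.168.94.2:8443 \
	  --certificate-authority=/home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --embed-certs=true
	kubectl --kubeconfig=/home/jenkins/minikube-integration/21724-5815/kubeconfig config set-context embed-certs-472518 \
	  --cluster=embed-certs-472518 --user=embed-certs-472518
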
	I1010 18:20:49.590713  315243 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:49.590778  315243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:49.592554  315243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:49.592830  315243 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:20:49.592901  315243 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:20:49.593019  315243 config.go:182] Loaded profile config "embed-certs-472518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:49.593025  315243 addons.go:69] Setting dashboard=true in profile "embed-certs-472518"
	I1010 18:20:49.593043  315243 addons.go:69] Setting default-storageclass=true in profile "embed-certs-472518"
	I1010 18:20:49.593086  315243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-472518"
	I1010 18:20:49.593067  315243 addons.go:238] Setting addon dashboard=true in "embed-certs-472518"
	W1010 18:20:49.593186  315243 addons.go:247] addon dashboard should already be in state true
	I1010 18:20:49.593234  315243 host.go:66] Checking if "embed-certs-472518" exists ...
	I1010 18:20:49.593029  315243 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-472518"
	I1010 18:20:49.593289  315243 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-472518"
	W1010 18:20:49.593302  315243 addons.go:247] addon storage-provisioner should already be in state true
	I1010 18:20:49.593335  315243 host.go:66] Checking if "embed-certs-472518" exists ...
	I1010 18:20:49.593410  315243 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:20:49.593740  315243 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:20:49.593886  315243 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:20:49.595259  315243 out.go:179] * Verifying Kubernetes components...
	I1010 18:20:49.596615  315243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:49.621223  315243 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1010 18:20:49.621687  315243 addons.go:238] Setting addon default-storageclass=true in "embed-certs-472518"
	W1010 18:20:49.621713  315243 addons.go:247] addon default-storageclass should already be in state true
	I1010 18:20:49.621741  315243 host.go:66] Checking if "embed-certs-472518" exists ...
	I1010 18:20:49.622223  315243 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:20:49.623807  315243 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:20:49.624706  315243 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1010 18:20:48.463897  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:48.560880  316039 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:20:48.565518  316039 fix.go:56] duration metric: took 4.87607827s for fixHost
	I1010 18:20:48.565545  316039 start.go:83] releasing machines lock for "no-preload-556024", held for 4.876130567s
	I1010 18:20:48.565605  316039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-556024
	I1010 18:20:48.590212  316039 ssh_runner.go:195] Run: cat /version.json
	I1010 18:20:48.590274  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:48.590309  316039 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:20:48.590374  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:48.611239  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:48.611223  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:48.707641  316039 ssh_runner.go:195] Run: systemctl --version
	I1010 18:20:48.779239  316039 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:20:48.822991  316039 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:20:48.827985  316039 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:20:48.828127  316039 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:20:48.838254  316039 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1010 18:20:48.838278  316039 start.go:495] detecting cgroup driver to use...
	I1010 18:20:48.838310  316039 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:20:48.838375  316039 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:20:48.855699  316039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:20:48.870095  316039 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:20:48.870150  316039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:20:48.889387  316039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:20:48.903428  316039 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:20:49.004846  316039 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:20:49.095121  316039 docker.go:234] disabling docker service ...
	I1010 18:20:49.095195  316039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:20:49.111399  316039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:20:49.124564  316039 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:20:49.233199  316039 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:20:49.371321  316039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:20:49.391416  316039 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:20:49.410665  316039 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:20:49.410726  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.422109  316039 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:20:49.422187  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.434507  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.445435  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.456792  316039 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:20:49.467113  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.478960  316039 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.491083  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.504692  316039 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:20:49.516727  316039 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:20:49.528623  316039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:49.657664  316039 ssh_runner.go:195] Run: sudo systemctl restart crio
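
Taken together, the sed edits above are meant to leave /etc/crio/crio.conf.d/02-crio.conf with roughly the shape sketched below before crio is restarted (an illustrative excerpt inferred from the commands, not captured from the host):

	sudo grep -A2 -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	# expected lines, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]
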
	I1010 18:20:49.845402  316039 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:20:49.845485  316039 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:20:49.849613  316039 start.go:563] Will wait 60s for crictl version
	I1010 18:20:49.849677  316039 ssh_runner.go:195] Run: which crictl
	I1010 18:20:49.853537  316039 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:20:49.887342  316039 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
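
This works without a --runtime-endpoint flag because crictl reads the socket from the /etc/crictl.yaml written earlier; a quick smoke test against the same runtime:

	sudo crictl info   # resolves unix:///var/run/crio/crio.sock from /etc/crictl.yaml
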
	I1010 18:20:49.887433  316039 ssh_runner.go:195] Run: crio --version
	I1010 18:20:49.930383  316039 ssh_runner.go:195] Run: crio --version
	I1010 18:20:49.976214  316039 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:20:47.762395  310776 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1010 18:20:47.766851  310776 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1010 18:20:47.766871  310776 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1010 18:20:47.783354  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1010 18:20:48.028048  310776 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 18:20:48.028155  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:48.028511  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-821769 minikube.k8s.io/updated_at=2025_10_10T18_20_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46 minikube.k8s.io/name=default-k8s-diff-port-821769 minikube.k8s.io/primary=true
	I1010 18:20:48.041226  310776 ops.go:34] apiserver oom_adj: -16
	I1010 18:20:48.128327  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:48.629265  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:49.129256  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:49.631157  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:50.128594  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:50.629277  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:51.129277  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:49.977548  316039 cli_runner.go:164] Run: docker network inspect no-preload-556024 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:20:50.001823  316039 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1010 18:20:50.006080  316039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:20:50.017957  316039 kubeadm.go:883] updating cluster {Name:no-preload-556024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:20:50.018111  316039 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:20:50.018151  316039 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:20:50.065609  316039 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:20:50.065631  316039 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:20:50.065639  316039 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1010 18:20:50.065740  316039 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-556024 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:20:50.065812  316039 ssh_runner.go:195] Run: crio config
	I1010 18:20:50.129406  316039 cni.go:84] Creating CNI manager for ""
	I1010 18:20:50.129498  316039 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:20:50.129530  316039 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 18:20:50.129567  316039 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-556024 NodeName:no-preload-556024 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:20:50.129730  316039 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-556024"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:20:50.129812  316039 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:20:50.142159  316039 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:20:50.142246  316039 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:20:50.152351  316039 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1010 18:20:50.168174  316039 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:20:50.184704  316039 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
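
With kubeadm.yaml.new now on the node, the generated config can be sanity-checked before any restart logic runs; a hedged example, assuming the validate subcommand (available in kubeadm v1.26 and later) is present in the staged binaries:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
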
	I1010 18:20:50.201688  316039 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:20:50.205576  316039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:20:50.216719  316039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:50.314580  316039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:20:50.339172  316039 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024 for IP: 192.168.76.2
	I1010 18:20:50.339196  316039 certs.go:195] generating shared ca certs ...
	I1010 18:20:50.339214  316039 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:50.339389  316039 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:20:50.339439  316039 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:20:50.339454  316039 certs.go:257] generating profile certs ...
	I1010 18:20:50.339572  316039 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/client.key
	I1010 18:20:50.339656  316039 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key.b1bc56db
	I1010 18:20:50.339729  316039 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.key
	I1010 18:20:50.339901  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:20:50.339937  316039 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:20:50.339947  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:20:50.339978  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:20:50.340018  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:20:50.340047  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:20:50.340152  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:20:50.341083  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:20:50.369071  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:20:50.396382  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:20:50.426223  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:20:50.462107  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:20:50.492175  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:20:50.515308  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:20:50.542463  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 18:20:50.567288  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:20:50.593916  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:20:50.623441  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:20:50.661822  316039 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:20:50.685220  316039 ssh_runner.go:195] Run: openssl version
	I1010 18:20:50.694018  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:20:50.707964  316039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:20:50.714772  316039 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:20:50.714863  316039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:20:50.775759  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:20:50.789361  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:20:50.802813  316039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:20:50.807903  316039 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:20:50.807966  316039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:20:50.865904  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:20:50.883902  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:20:50.901914  316039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:50.908945  316039 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:50.909005  316039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:50.970081  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
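
The hex link names (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: each CA under /etc/ssl/certs is located via a <subject-hash>.0 symlink. The minikubeCA link above can be reproduced with:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
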
	I1010 18:20:50.984254  316039 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:20:50.990832  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:20:51.055643  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:20:51.124755  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:20:51.195467  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:20:51.257855  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:20:51.321018  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
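
Each of these probes relies on openssl's -checkend flag: exit 0 if the certificate will still be valid N seconds from now, non-zero otherwise. With N=86400 (24 hours), any cert due to expire within a day fails the check. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"
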
	I1010 18:20:51.374149  316039 kubeadm.go:400] StartCluster: {Name:no-preload-556024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:20:51.374313  316039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:20:51.374389  316039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:20:51.434445  316039 cri.go:89] found id: "624948aa983f6a950a5a86e99ebbf4e3cec99b2849460ed697524b3fc4ffac05"
	I1010 18:20:51.434471  316039 cri.go:89] found id: "63abfddfe6fe2887c4901b8e265aae05ec3330bd42bd0d67e011b354a39c6023"
	I1010 18:20:51.434477  316039 cri.go:89] found id: "579953ecaa5c709ae190ac505c57c31de755d4d689b3be28199b4f18c038f574"
	I1010 18:20:51.434482  316039 cri.go:89] found id: "f690c75f2865bf33ee267a92d360114ddc8d677ee96e0e894aa2e4d900fd9adf"
	I1010 18:20:51.434486  316039 cri.go:89] found id: ""
	I1010 18:20:51.434533  316039 ssh_runner.go:195] Run: sudo runc list -f json
	W1010 18:20:51.460616  316039 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:20:51Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:20:51.460703  316039 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:20:51.481897  316039 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1010 18:20:51.481919  316039 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1010 18:20:51.481972  316039 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 18:20:51.498830  316039 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:20:51.500143  316039 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-556024" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:51.501037  316039 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-5815/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-556024" cluster setting kubeconfig missing "no-preload-556024" context setting]
	I1010 18:20:51.502303  316039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:51.504699  316039 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 18:20:51.525552  316039 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1010 18:20:51.525663  316039 kubeadm.go:601] duration metric: took 43.737077ms to restartPrimaryControlPlane
	I1010 18:20:51.525704  316039 kubeadm.go:402] duration metric: took 151.565362ms to StartCluster
	I1010 18:20:51.525736  316039 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:51.525837  316039 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:51.528729  316039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:51.529336  316039 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:20:51.529408  316039 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:20:51.530224  316039 addons.go:69] Setting storage-provisioner=true in profile "no-preload-556024"
	I1010 18:20:51.530244  316039 addons.go:238] Setting addon storage-provisioner=true in "no-preload-556024"
	W1010 18:20:51.530252  316039 addons.go:247] addon storage-provisioner should already be in state true
	I1010 18:20:51.530282  316039 host.go:66] Checking if "no-preload-556024" exists ...
	I1010 18:20:51.530800  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:51.531125  316039 addons.go:69] Setting dashboard=true in profile "no-preload-556024"
	I1010 18:20:51.531155  316039 addons.go:238] Setting addon dashboard=true in "no-preload-556024"
	I1010 18:20:51.531200  316039 addons.go:69] Setting default-storageclass=true in profile "no-preload-556024"
	W1010 18:20:51.531164  316039 addons.go:247] addon dashboard should already be in state true
	I1010 18:20:51.531221  316039 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-556024"
	I1010 18:20:51.531252  316039 host.go:66] Checking if "no-preload-556024" exists ...
	I1010 18:20:51.531518  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:51.531721  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:51.532678  316039 out.go:179] * Verifying Kubernetes components...
	I1010 18:20:51.529573  316039 config.go:182] Loaded profile config "no-preload-556024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:51.533687  316039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:51.568923  316039 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:20:51.570126  316039 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:51.570179  316039 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:20:51.570260  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:51.572781  316039 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1010 18:20:51.573105  316039 addons.go:238] Setting addon default-storageclass=true in "no-preload-556024"
	W1010 18:20:51.573167  316039 addons.go:247] addon default-storageclass should already be in state true
	I1010 18:20:51.573209  316039 host.go:66] Checking if "no-preload-556024" exists ...
	I1010 18:20:51.573839  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:51.574682  316039 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1010 18:20:49.625348  315243 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:49.625366  315243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:20:49.625419  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:49.625898  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1010 18:20:49.625914  315243 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1010 18:20:49.625963  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:49.665940  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:49.667220  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:49.670104  315243 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:49.670128  315243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:20:49.670179  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:49.701992  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:49.790496  315243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:20:49.808286  315243 node_ready.go:35] waiting up to 6m0s for node "embed-certs-472518" to be "Ready" ...
	I1010 18:20:49.877948  315243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:49.900523  315243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:49.904789  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1010 18:20:49.904813  315243 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1010 18:20:49.926463  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1010 18:20:49.926491  315243 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1010 18:20:49.948786  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1010 18:20:49.948861  315243 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1010 18:20:49.970537  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1010 18:20:49.970713  315243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1010 18:20:49.991031  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1010 18:20:49.991096  315243 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1010 18:20:50.007758  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1010 18:20:50.007779  315243 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1010 18:20:50.024836  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1010 18:20:50.024870  315243 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1010 18:20:50.047286  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1010 18:20:50.047312  315243 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1010 18:20:50.066137  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:20:50.066162  315243 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1010 18:20:50.082085  315243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
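
A natural follow-up check once the manifests above are applied, assuming the namespace and deployment names used by minikube's dashboard addon (both "kubernetes-dashboard"):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
	  -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=120s
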
	I1010 18:20:51.728301  315243 node_ready.go:49] node "embed-certs-472518" is "Ready"
	I1010 18:20:51.728406  315243 node_ready.go:38] duration metric: took 1.920029979s for node "embed-certs-472518" to be "Ready" ...
	I1010 18:20:51.728515  315243 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:20:51.728588  315243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:20:51.628908  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:52.129081  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:52.628493  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:52.744888  310776 kubeadm.go:1113] duration metric: took 4.716782723s to wait for elevateKubeSystemPrivileges
	I1010 18:20:52.744920  310776 kubeadm.go:402] duration metric: took 15.95079426s to StartCluster
	I1010 18:20:52.744940  310776 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:52.745008  310776 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:52.748330  310776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:52.748752  310776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:20:52.749079  310776 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:20:52.749218  310776 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-821769"
	I1010 18:20:52.749252  310776 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-821769"
	I1010 18:20:52.749700  310776 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:20:52.749995  310776 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-821769"
	I1010 18:20:52.751220  310776 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-821769"
	I1010 18:20:52.751275  310776 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:20:52.750124  310776 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:52.750164  310776 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:20:52.751814  310776 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:20:52.754296  310776 out.go:179] * Verifying Kubernetes components...
	I1010 18:20:52.757073  310776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:52.784878  310776 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-821769"
	I1010 18:20:52.784930  310776 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:20:52.785459  310776 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:20:52.789598  310776 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:20:51.884191  315243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.006198253s)
	I1010 18:20:53.049905  315243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.14934381s)
	I1010 18:20:53.050041  315243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.967926322s)
	I1010 18:20:53.050251  315243 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.321637338s)
	I1010 18:20:53.050277  315243 api_server.go:72] duration metric: took 3.4574213s to wait for apiserver process to appear ...
	I1010 18:20:53.050285  315243 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:20:53.050312  315243 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1010 18:20:53.052034  315243 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-472518 addons enable metrics-server
	
	I1010 18:20:53.053389  315243 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
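
The healthz wait above boils down to an HTTPS GET against the endpoint printed in the log; the apiserver serves a cert signed by the cluster's own CA, hence -k in a by-hand check:

	curl -sk https://192.168.94.2:8443/healthz   # a healthy apiserver answers: ok
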
	I1010 18:20:51.575500  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1010 18:20:51.575526  316039 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1010 18:20:51.575614  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:51.610790  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:51.615370  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:51.619423  316039 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:51.619520  316039 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:20:51.619582  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:51.653223  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:51.773499  316039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:20:51.814711  316039 node_ready.go:35] waiting up to 6m0s for node "no-preload-556024" to be "Ready" ...
	I1010 18:20:51.914904  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1010 18:20:51.914932  316039 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1010 18:20:51.923500  316039 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:51.949787  316039 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:51.968366  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1010 18:20:51.968396  316039 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1010 18:20:52.039689  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1010 18:20:52.039716  316039 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1010 18:20:52.098625  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1010 18:20:52.098653  316039 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1010 18:20:52.167741  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1010 18:20:52.167801  316039 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1010 18:20:52.219328  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1010 18:20:52.219352  316039 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1010 18:20:52.265308  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1010 18:20:52.265341  316039 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1010 18:20:52.313716  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1010 18:20:52.313766  316039 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1010 18:20:52.352592  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:20:52.352644  316039 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1010 18:20:52.387452  316039 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
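All dashboard manifests are applied in one kubectl invocation against the node's root-owned kubeconfig. One way to spot-check the result (a sketch, assuming the kubernetes-dashboard namespace that dashboard-ns.yaml conventionally creates):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard get deploy,svc,sa
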
	I1010 18:20:52.790760  310776 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:52.790790  310776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:20:52.790870  310776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:20:52.821845  310776 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:52.821873  310776 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:20:52.821928  310776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:20:52.827947  310776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:20:52.860145  310776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:20:52.948729  310776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
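The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts block ahead of the forward plugin so that host.minikube.internal resolves from inside the cluster, and a log directive ahead of errors. The injected Corefile fragment is:

	hosts {
	   192.168.103.1 host.minikube.internal
	   fallthrough
	}
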
	I1010 18:20:52.991756  310776 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:20:53.123700  310776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:53.139884  310776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:53.325308  310776 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1010 18:20:53.330566  310776 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-821769" to be "Ready" ...
	I1010 18:20:53.592278  310776 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1010 18:20:50.034107  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:52.042686  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	I1010 18:20:54.467305  316039 node_ready.go:49] node "no-preload-556024" is "Ready"
	I1010 18:20:54.467335  316039 node_ready.go:38] duration metric: took 2.652575598s for node "no-preload-556024" to be "Ready" ...
	I1010 18:20:54.467351  316039 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:20:54.467400  316039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:20:55.159684  316039 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.236143034s)
	I1010 18:20:55.159770  316039 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.209938968s)
	I1010 18:20:55.159920  316039 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.772426078s)
	I1010 18:20:55.159956  316039 api_server.go:72] duration metric: took 3.630212814s to wait for apiserver process to appear ...
	I1010 18:20:55.159971  316039 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:20:55.159989  316039 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1010 18:20:55.165079  316039 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:20:55.165108  316039 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
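The 500s above are expected while the apiserver's post-start hooks (here rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) finish; the harness simply polls until /healthz returns 200. A sketch of the same check from a shell, assuming /healthz is reachable anonymously as in a default kubeadm setup (the ?verbose form prints the per-check [+]/[-] listing shown above):

	until curl -ks https://192.168.76.2:8443/healthz | grep -qx ok; do sleep 1; done
	curl -ks 'https://192.168.76.2:8443/healthz?verbose'
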
	I1010 18:20:55.171486  316039 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-556024 addons enable metrics-server
	
	I1010 18:20:55.172798  316039 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1010 18:20:53.593383  310776 addons.go:514] duration metric: took 844.300192ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1010 18:20:53.831818  310776 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-821769" context rescaled to 1 replicas
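Rescaling coredns to a single replica (kapi.go:214 above) keeps single-node clusters lean; the equivalent manual command would be:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl -n kube-system scale deployment coredns --replicas=1
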
	W1010 18:20:55.334794  310776 node_ready.go:57] node "default-k8s-diff-port-821769" has "Ready":"False" status (will retry)
	I1010 18:20:53.054435  315243 addons.go:514] duration metric: took 3.461533728s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1010 18:20:53.058403  315243 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:20:53.058478  315243 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 18:20:53.551135  315243 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1010 18:20:53.557162  315243 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1010 18:20:53.558501  315243 api_server.go:141] control plane version: v1.34.1
	I1010 18:20:53.558524  315243 api_server.go:131] duration metric: took 508.226677ms to wait for apiserver health ...
	I1010 18:20:53.558535  315243 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:20:53.563761  315243 system_pods.go:59] 8 kube-system pods found
	I1010 18:20:53.563802  315243 system_pods.go:61] "coredns-66bc5c9577-hrcxc" [98494133-86f7-4d52-9de0-1b648c4e1eac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:53.563840  315243 system_pods.go:61] "etcd-embed-certs-472518" [ef258b42-940e-4df8-bda7-2abda18693ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:20:53.563851  315243 system_pods.go:61] "kindnet-kpr69" [a2bc6e25-f261-43aa-b10b-35757900e93b] Running
	I1010 18:20:53.563861  315243 system_pods.go:61] "kube-apiserver-embed-certs-472518" [d3c6aec3-5dbe-4bda-a057-5ac1cacd6dc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:20:53.563870  315243 system_pods.go:61] "kube-controller-manager-embed-certs-472518" [35d677fb-3f5f-4b3e-8175-60234a80c67e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:20:53.563877  315243 system_pods.go:61] "kube-proxy-bq985" [e2d6bf76-4b03-4118-b61b-605d27646095] Running
	I1010 18:20:53.563888  315243 system_pods.go:61] "kube-scheduler-embed-certs-472518" [7ebab2fe-6192-45eb-80a1-a169ea655e6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:20:53.563912  315243 system_pods.go:61] "storage-provisioner" [3237266d-6c19-4af5-aef2-8d99c561d535] Running
	I1010 18:20:53.563925  315243 system_pods.go:74] duration metric: took 5.382708ms to wait for pod list to return data ...
	I1010 18:20:53.563947  315243 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:20:53.566753  315243 default_sa.go:45] found service account: "default"
	I1010 18:20:53.566775  315243 default_sa.go:55] duration metric: took 2.816607ms for default service account to be created ...
	I1010 18:20:53.566784  315243 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:20:53.569996  315243 system_pods.go:86] 8 kube-system pods found
	I1010 18:20:53.570035  315243 system_pods.go:89] "coredns-66bc5c9577-hrcxc" [98494133-86f7-4d52-9de0-1b648c4e1eac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:53.570047  315243 system_pods.go:89] "etcd-embed-certs-472518" [ef258b42-940e-4df8-bda7-2abda18693ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:20:53.570092  315243 system_pods.go:89] "kindnet-kpr69" [a2bc6e25-f261-43aa-b10b-35757900e93b] Running
	I1010 18:20:53.570102  315243 system_pods.go:89] "kube-apiserver-embed-certs-472518" [d3c6aec3-5dbe-4bda-a057-5ac1cacd6dc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:20:53.570118  315243 system_pods.go:89] "kube-controller-manager-embed-certs-472518" [35d677fb-3f5f-4b3e-8175-60234a80c67e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:20:53.570132  315243 system_pods.go:89] "kube-proxy-bq985" [e2d6bf76-4b03-4118-b61b-605d27646095] Running
	I1010 18:20:53.570140  315243 system_pods.go:89] "kube-scheduler-embed-certs-472518" [7ebab2fe-6192-45eb-80a1-a169ea655e6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:20:53.570145  315243 system_pods.go:89] "storage-provisioner" [3237266d-6c19-4af5-aef2-8d99c561d535] Running
	I1010 18:20:53.570154  315243 system_pods.go:126] duration metric: took 3.363508ms to wait for k8s-apps to be running ...
	I1010 18:20:53.570169  315243 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:20:53.570223  315243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:20:53.589472  315243 system_svc.go:56] duration metric: took 19.294939ms WaitForService to wait for kubelet
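system_svc.go relies on systemctl's exit status rather than parsing its output. The same check from a shell, where exit code 0 means the unit is active:

	sudo systemctl is-active --quiet kubelet && echo kubelet is running
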
	I1010 18:20:53.589498  315243 kubeadm.go:586] duration metric: took 3.99664162s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:20:53.589514  315243 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:20:53.593679  315243 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:20:53.593708  315243 node_conditions.go:123] node cpu capacity is 8
	I1010 18:20:53.593724  315243 node_conditions.go:105] duration metric: took 4.204587ms to run NodePressure ...
	I1010 18:20:53.593743  315243 start.go:241] waiting for startup goroutines ...
	I1010 18:20:53.593753  315243 start.go:246] waiting for cluster config update ...
	I1010 18:20:53.593767  315243 start.go:255] writing updated cluster config ...
	I1010 18:20:53.594097  315243 ssh_runner.go:195] Run: rm -f paused
	I1010 18:20:53.599326  315243 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:20:53.605128  315243 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hrcxc" in "kube-system" namespace to be "Ready" or be gone ...
	W1010 18:20:55.615427  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	I1010 18:20:55.173737  316039 addons.go:514] duration metric: took 3.644339704s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1010 18:20:55.660767  316039 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1010 18:20:55.667019  316039 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:20:55.667122  316039 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 18:20:56.160831  316039 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1010 18:20:56.166112  316039 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1010 18:20:56.167511  316039 api_server.go:141] control plane version: v1.34.1
	I1010 18:20:56.167538  316039 api_server.go:131] duration metric: took 1.007560189s to wait for apiserver health ...
	I1010 18:20:56.167549  316039 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:20:56.171980  316039 system_pods.go:59] 8 kube-system pods found
	I1010 18:20:56.172028  316039 system_pods.go:61] "coredns-66bc5c9577-wpsrd" [316be091-2de7-417c-b44b-1d26108e3ed3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:56.172041  316039 system_pods.go:61] "etcd-no-preload-556024" [0f8f77e3-e838-4f27-9f17-2cd264198574] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:20:56.172063  316039 system_pods.go:61] "kindnet-wsk6h" [71384861-5289-4d2b-8d62-b7d2c27d86b8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1010 18:20:56.172073  316039 system_pods.go:61] "kube-apiserver-no-preload-556024" [7efe66ae-83bf-4ea5-a271-d8e944f74053] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:20:56.172083  316039 system_pods.go:61] "kube-controller-manager-no-preload-556024" [9e7fbd67-ce38-425d-b80d-b8ff3748fa70] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:20:56.172091  316039 system_pods.go:61] "kube-proxy-frchp" [3457ebf4-7608-4c78-b8dc-3a92a2fb32ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1010 18:20:56.172099  316039 system_pods.go:61] "kube-scheduler-no-preload-556024" [c6fb51f0-cf8d-4a56-aba5-95aff4190b44] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:20:56.172107  316039 system_pods.go:61] "storage-provisioner" [42a21c5e-4318-43f7-8d2a-dc62676b17c2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:20:56.172115  316039 system_pods.go:74] duration metric: took 4.558605ms to wait for pod list to return data ...
	I1010 18:20:56.172125  316039 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:20:56.174926  316039 default_sa.go:45] found service account: "default"
	I1010 18:20:56.174945  316039 default_sa.go:55] duration metric: took 2.814097ms for default service account to be created ...
	I1010 18:20:56.174954  316039 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:20:56.177615  316039 system_pods.go:86] 8 kube-system pods found
	I1010 18:20:56.177644  316039 system_pods.go:89] "coredns-66bc5c9577-wpsrd" [316be091-2de7-417c-b44b-1d26108e3ed3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:56.177653  316039 system_pods.go:89] "etcd-no-preload-556024" [0f8f77e3-e838-4f27-9f17-2cd264198574] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:20:56.177664  316039 system_pods.go:89] "kindnet-wsk6h" [71384861-5289-4d2b-8d62-b7d2c27d86b8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1010 18:20:56.177673  316039 system_pods.go:89] "kube-apiserver-no-preload-556024" [7efe66ae-83bf-4ea5-a271-d8e944f74053] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:20:56.177683  316039 system_pods.go:89] "kube-controller-manager-no-preload-556024" [9e7fbd67-ce38-425d-b80d-b8ff3748fa70] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:20:56.177697  316039 system_pods.go:89] "kube-proxy-frchp" [3457ebf4-7608-4c78-b8dc-3a92a2fb32ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1010 18:20:56.177706  316039 system_pods.go:89] "kube-scheduler-no-preload-556024" [c6fb51f0-cf8d-4a56-aba5-95aff4190b44] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:20:56.177717  316039 system_pods.go:89] "storage-provisioner" [42a21c5e-4318-43f7-8d2a-dc62676b17c2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:20:56.177725  316039 system_pods.go:126] duration metric: took 2.765119ms to wait for k8s-apps to be running ...
	I1010 18:20:56.177734  316039 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:20:56.177779  316039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:20:56.195926  316039 system_svc.go:56] duration metric: took 18.185245ms WaitForService to wait for kubelet
	I1010 18:20:56.195953  316039 kubeadm.go:586] duration metric: took 4.666211157s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:20:56.195977  316039 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:20:56.199540  316039 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:20:56.199578  316039 node_conditions.go:123] node cpu capacity is 8
	I1010 18:20:56.199596  316039 node_conditions.go:105] duration metric: took 3.612981ms to run NodePressure ...
	I1010 18:20:56.199610  316039 start.go:241] waiting for startup goroutines ...
	I1010 18:20:56.199621  316039 start.go:246] waiting for cluster config update ...
	I1010 18:20:56.199635  316039 start.go:255] writing updated cluster config ...
	I1010 18:20:56.199914  316039 ssh_runner.go:195] Run: rm -f paused
	I1010 18:20:56.205011  316039 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:20:56.210819  316039 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wpsrd" in "kube-system" namespace to be "Ready" or be gone ...
	W1010 18:20:58.216565  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:20:54.534826  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:56.537833  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:57.839422  310776 node_ready.go:57] node "default-k8s-diff-port-821769" has "Ready":"False" status (will retry)
	W1010 18:21:00.334364  310776 node_ready.go:57] node "default-k8s-diff-port-821769" has "Ready":"False" status (will retry)
	W1010 18:20:58.112627  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:00.611479  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:00.219211  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:02.732835  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:20:59.033775  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:21:01.532897  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:21:02.334772  310776 node_ready.go:57] node "default-k8s-diff-port-821769" has "Ready":"False" status (will retry)
	I1010 18:21:04.334550  310776 node_ready.go:49] node "default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:04.334584  310776 node_ready.go:38] duration metric: took 11.003942186s for node "default-k8s-diff-port-821769" to be "Ready" ...
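node_ready.go polls the node's Ready condition in a retry loop; a one-shot equivalent with a configured kubectl would be (a sketch, using the same node name and timeout as above):

	kubectl wait --for=condition=Ready node/default-k8s-diff-port-821769 --timeout=6m
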
	I1010 18:21:04.334602  310776 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:21:04.334661  310776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:21:04.352414  310776 api_server.go:72] duration metric: took 11.600692282s to wait for apiserver process to appear ...
	I1010 18:21:04.352440  310776 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:21:04.352461  310776 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1010 18:21:04.357202  310776 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1010 18:21:04.358448  310776 api_server.go:141] control plane version: v1.34.1
	I1010 18:21:04.358475  310776 api_server.go:131] duration metric: took 6.027777ms to wait for apiserver health ...
	I1010 18:21:04.358486  310776 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:21:04.362525  310776 system_pods.go:59] 8 kube-system pods found
	I1010 18:21:04.362567  310776 system_pods.go:61] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:04.362576  310776 system_pods.go:61] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running
	I1010 18:21:04.362584  310776 system_pods.go:61] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:04.362590  310776 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running
	I1010 18:21:04.362597  310776 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running
	I1010 18:21:04.362604  310776 system_pods.go:61] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:04.362609  310776 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running
	I1010 18:21:04.362621  310776 system_pods.go:61] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:21:04.362634  310776 system_pods.go:74] duration metric: took 4.14166ms to wait for pod list to return data ...
	I1010 18:21:04.362650  310776 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:21:04.365765  310776 default_sa.go:45] found service account: "default"
	I1010 18:21:04.365790  310776 default_sa.go:55] duration metric: took 3.13114ms for default service account to be created ...
	I1010 18:21:04.365801  310776 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:21:04.368917  310776 system_pods.go:86] 8 kube-system pods found
	I1010 18:21:04.368948  310776 system_pods.go:89] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:04.368953  310776 system_pods.go:89] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running
	I1010 18:21:04.368962  310776 system_pods.go:89] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:04.368966  310776 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running
	I1010 18:21:04.368970  310776 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running
	I1010 18:21:04.368973  310776 system_pods.go:89] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:04.368977  310776 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running
	I1010 18:21:04.368982  310776 system_pods.go:89] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:21:04.369018  310776 retry.go:31] will retry after 236.267744ms: missing components: kube-dns
	I1010 18:21:04.617498  310776 system_pods.go:86] 8 kube-system pods found
	I1010 18:21:04.617554  310776 system_pods.go:89] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:04.617563  310776 system_pods.go:89] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running
	I1010 18:21:04.617572  310776 system_pods.go:89] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:04.617577  310776 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running
	I1010 18:21:04.617583  310776 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running
	I1010 18:21:04.617588  310776 system_pods.go:89] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:04.617593  310776 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running
	I1010 18:21:04.617600  310776 system_pods.go:89] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:21:04.617679  310776 retry.go:31] will retry after 358.019281ms: missing components: kube-dns
	I1010 18:21:04.980610  310776 system_pods.go:86] 8 kube-system pods found
	I1010 18:21:04.980648  310776 system_pods.go:89] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:04.980657  310776 system_pods.go:89] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running
	I1010 18:21:04.980665  310776 system_pods.go:89] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:04.980671  310776 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running
	I1010 18:21:04.980677  310776 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running
	I1010 18:21:04.980682  310776 system_pods.go:89] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:04.980691  310776 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running
	I1010 18:21:04.980698  310776 system_pods.go:89] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:21:04.980718  310776 retry.go:31] will retry after 460.448201ms: missing components: kube-dns
	I1010 18:21:05.476108  310776 system_pods.go:86] 8 kube-system pods found
	I1010 18:21:05.476135  310776 system_pods.go:89] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Running
	I1010 18:21:05.476141  310776 system_pods.go:89] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running
	I1010 18:21:05.476147  310776 system_pods.go:89] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:05.476153  310776 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running
	I1010 18:21:05.476158  310776 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running
	I1010 18:21:05.476164  310776 system_pods.go:89] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:05.476169  310776 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running
	I1010 18:21:05.476175  310776 system_pods.go:89] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Running
	I1010 18:21:05.476185  310776 system_pods.go:126] duration metric: took 1.110376994s to wait for k8s-apps to be running ...
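The retry.go lines above back off with increasing, jittered delays (236ms, 358ms, 460ms) until kube-dns reports Running. Expressed declaratively with kubectl, the same gate is roughly (a sketch; k8s-app=kube-dns is the label system_pods.go keys on):

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=2m
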
	I1010 18:21:05.476203  310776 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:21:05.476263  310776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:05.491314  310776 system_svc.go:56] duration metric: took 15.10412ms WaitForService to wait for kubelet
	I1010 18:21:05.491339  310776 kubeadm.go:586] duration metric: took 12.739624944s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:21:05.491357  310776 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:21:05.494549  310776 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:21:05.494574  310776 node_conditions.go:123] node cpu capacity is 8
	I1010 18:21:05.494597  310776 node_conditions.go:105] duration metric: took 3.235725ms to run NodePressure ...
	I1010 18:21:05.494610  310776 start.go:241] waiting for startup goroutines ...
	I1010 18:21:05.494620  310776 start.go:246] waiting for cluster config update ...
	I1010 18:21:05.494635  310776 start.go:255] writing updated cluster config ...
	I1010 18:21:05.505739  310776 ssh_runner.go:195] Run: rm -f paused
	I1010 18:21:05.510435  310776 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:21:05.514397  310776 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wrz5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.519411  310776 pod_ready.go:94] pod "coredns-66bc5c9577-wrz5v" is "Ready"
	I1010 18:21:05.519440  310776 pod_ready.go:86] duration metric: took 5.021224ms for pod "coredns-66bc5c9577-wrz5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.521798  310776 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.526425  310776 pod_ready.go:94] pod "etcd-default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:05.526453  310776 pod_ready.go:86] duration metric: took 4.627916ms for pod "etcd-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.528777  310776 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.533585  310776 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:05.533610  310776 pod_ready.go:86] duration metric: took 4.808877ms for pod "kube-apiserver-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.535771  310776 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.915199  310776 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:05.915227  310776 pod_ready.go:86] duration metric: took 379.433579ms for pod "kube-controller-manager-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:06.115325  310776 pod_ready.go:83] waiting for pod "kube-proxy-h2mzf" in "kube-system" namespace to be "Ready" or be gone ...
	W1010 18:21:02.613407  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:04.613477  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	I1010 18:21:06.515281  310776 pod_ready.go:94] pod "kube-proxy-h2mzf" is "Ready"
	I1010 18:21:06.515310  310776 pod_ready.go:86] duration metric: took 399.959779ms for pod "kube-proxy-h2mzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:06.716017  310776 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:07.115133  310776 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:07.115162  310776 pod_ready.go:86] duration metric: took 399.114099ms for pod "kube-scheduler-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:07.115176  310776 pod_ready.go:40] duration metric: took 1.604699188s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:21:07.163929  310776 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:21:07.192097  310776 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-821769" cluster and "default" namespace by default
	W1010 18:21:05.217220  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:07.716808  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:04.032734  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:21:06.531020  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:21:08.531357  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	I1010 18:21:09.532675  309154 pod_ready.go:94] pod "coredns-5dd5756b68-qfwck" is "Ready"
	I1010 18:21:09.532706  309154 pod_ready.go:86] duration metric: took 32.006855812s for pod "coredns-5dd5756b68-qfwck" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.535886  309154 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.540776  309154 pod_ready.go:94] pod "etcd-old-k8s-version-141193" is "Ready"
	I1010 18:21:09.540797  309154 pod_ready.go:86] duration metric: took 4.887324ms for pod "etcd-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.543453  309154 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.547188  309154 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-141193" is "Ready"
	I1010 18:21:09.547212  309154 pod_ready.go:86] duration metric: took 3.738135ms for pod "kube-apiserver-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.549745  309154 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.730359  309154 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-141193" is "Ready"
	I1010 18:21:09.730391  309154 pod_ready.go:86] duration metric: took 180.622284ms for pod "kube-controller-manager-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.930224  309154 pod_ready.go:83] waiting for pod "kube-proxy-n9klp" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:10.329749  309154 pod_ready.go:94] pod "kube-proxy-n9klp" is "Ready"
	I1010 18:21:10.329777  309154 pod_ready.go:86] duration metric: took 399.527981ms for pod "kube-proxy-n9klp" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:10.533434  309154 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:10.930255  309154 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-141193" is "Ready"
	I1010 18:21:10.930280  309154 pod_ready.go:86] duration metric: took 396.81759ms for pod "kube-scheduler-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:10.930291  309154 pod_ready.go:40] duration metric: took 33.409574947s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:21:10.976268  309154 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1010 18:21:10.978153  309154 out.go:203] 
	W1010 18:21:10.979362  309154 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1010 18:21:10.980507  309154 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1010 18:21:10.981654  309154 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-141193" cluster and "default" namespace by default
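The skew warning fires because the host kubectl (1.34.1) is six minor versions ahead of the 1.28.0 server, far outside the supported one-minor-version client/server skew. The suggested workaround runs a client that minikube pins to the cluster's version, e.g.:

	minikube -p old-k8s-version-141193 kubectl -- get pods -A
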
	W1010 18:21:07.110875  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:09.610687  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:11.612648  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:09.717125  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:12.215991  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 10 18:21:04 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:04.63680424Z" level=info msg="Starting container: 69579c33953caf22566162e86e11627c7bd2b22ed5fd2277770284cd04b19661" id=f640f525-de4d-46a6-b979-c902d313da1c name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:21:04 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:04.64190621Z" level=info msg="Started container" PID=1866 containerID=69579c33953caf22566162e86e11627c7bd2b22ed5fd2277770284cd04b19661 description=kube-system/coredns-66bc5c9577-wrz5v/coredns id=f640f525-de4d-46a6-b979-c902d313da1c name=/runtime.v1.RuntimeService/StartContainer sandboxID=272ba5e945cafdd59e9087e173ae45ac438350e24b2877bf90b35f02603a5171
	Oct 10 18:21:07 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:07.697342223Z" level=info msg="Running pod sandbox: default/busybox/POD" id=01fa0992-173d-4495-9723-4d98f7e2f9f0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:21:07 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:07.697456756Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:07 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:07.702295799Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cefa34c6e6b10c6fd2f403c3d636a106594f389f0176cc4fc4e5408df69a2029 UID:2756f9b0-fbc0-4e80-9636-d7ae1972908b NetNS:/var/run/netns/29403f42-31e7-4975-b02c-fcbb73b02784 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d925e8}] Aliases:map[]}"
	Oct 10 18:21:07 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:07.702325368Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 10 18:21:07 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:07.71245784Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cefa34c6e6b10c6fd2f403c3d636a106594f389f0176cc4fc4e5408df69a2029 UID:2756f9b0-fbc0-4e80-9636-d7ae1972908b NetNS:/var/run/netns/29403f42-31e7-4975-b02c-fcbb73b02784 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d925e8}] Aliases:map[]}"
	Oct 10 18:21:07 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:07.712624425Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 10 18:21:07 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:07.713626211Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 10 18:21:07 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:07.714622444Z" level=info msg="Ran pod sandbox cefa34c6e6b10c6fd2f403c3d636a106594f389f0176cc4fc4e5408df69a2029 with infra container: default/busybox/POD" id=01fa0992-173d-4495-9723-4d98f7e2f9f0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:21:07 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:07.715924209Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ac6a2a88-2119-4d31-8115-9558346e518c name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:07 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:07.716083786Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ac6a2a88-2119-4d31-8115-9558346e518c name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:07 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:07.716150345Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ac6a2a88-2119-4d31-8115-9558346e518c name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:07 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:07.71702714Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ef6e1f55-3e9e-4324-8a46-17c082c1fbf9 name=/runtime.v1.ImageService/PullImage
	Oct 10 18:21:07 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:07.718800993Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 10 18:21:09 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:09.787605311Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=ef6e1f55-3e9e-4324-8a46-17c082c1fbf9 name=/runtime.v1.ImageService/PullImage
	Oct 10 18:21:09 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:09.788464024Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=389934d8-0ef9-4c8a-8405-4a43e0ead23d name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:09 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:09.789831802Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=720b8aed-85f6-4066-80fb-4ea2ff98106c name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:09 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:09.793340999Z" level=info msg="Creating container: default/busybox/busybox" id=b8edb58b-797b-4142-abc4-5881f69d4997 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:09 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:09.793973873Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:09 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:09.797464794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:09 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:09.797888046Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:09 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:09.81870545Z" level=info msg="Created container c146dbb77fa101353e93176237b5154b6e46e8e5d6743de7d14871a803167b40: default/busybox/busybox" id=b8edb58b-797b-4142-abc4-5881f69d4997 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:09 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:09.819369273Z" level=info msg="Starting container: c146dbb77fa101353e93176237b5154b6e46e8e5d6743de7d14871a803167b40" id=6560362c-41d0-463e-93df-3b122eb2ac87 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:21:09 default-k8s-diff-port-821769 crio[776]: time="2025-10-10T18:21:09.821036335Z" level=info msg="Started container" PID=1937 containerID=c146dbb77fa101353e93176237b5154b6e46e8e5d6743de7d14871a803167b40 description=default/busybox/busybox id=6560362c-41d0-463e-93df-3b122eb2ac87 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cefa34c6e6b10c6fd2f403c3d636a106594f389f0176cc4fc4e5408df69a2029
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	c146dbb77fa10       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   cefa34c6e6b10       busybox                                                default
	69579c33953ca       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   272ba5e945caf       coredns-66bc5c9577-wrz5v                               kube-system
	c3e2933bf4c67       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   47ecc26d01db4       storage-provisioner                                    kube-system
	7294c140ca15c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   1c4e537f2389b       kube-proxy-h2mzf                                       kube-system
	677c7a8fc084f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   4072aa71e7c5e       kindnet-4w475                                          kube-system
	96b3e0a6cddf0       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   edd35b51b260c       kube-controller-manager-default-k8s-diff-port-821769   kube-system
	ed6e3b85ebabf       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   e5a56cb2dfe8e       kube-apiserver-default-k8s-diff-port-821769            kube-system
	73fe24d2cf54b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   eb3bd8f1761bf       kube-scheduler-default-k8s-diff-port-821769            kube-system
	781fedd9ff3ef       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   db6e46baadaaf       etcd-default-k8s-diff-port-821769                      kube-system
	
	
	==> coredns [69579c33953caf22566162e86e11627c7bd2b22ed5fd2277770284cd04b19661] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34449 - 21933 "HINFO IN 6050759139437484355.3445597608063171965. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.025181474s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-821769
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-821769
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=default-k8s-diff-port-821769
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_20_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:20:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-821769
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:21:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:21:04 +0000   Fri, 10 Oct 2025 18:20:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:21:04 +0000   Fri, 10 Oct 2025 18:20:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:21:04 +0000   Fri, 10 Oct 2025 18:20:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 18:21:04 +0000   Fri, 10 Oct 2025 18:21:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-821769
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                41d605da-1886-46ad-9ac8-df71dd2b8693
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-wrz5v                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-default-k8s-diff-port-821769                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-4w475                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-821769             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-821769    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-h2mzf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-821769             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node default-k8s-diff-port-821769 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node default-k8s-diff-port-821769 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node default-k8s-diff-port-821769 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node default-k8s-diff-port-821769 event: Registered Node default-k8s-diff-port-821769 in Controller
	  Normal  NodeReady                12s   kubelet          Node default-k8s-diff-port-821769 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 95 0c 3e 92 2e 08 06
	[  +0.052845] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[ +11.354316] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[  +7.101927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 9b 73 27 8c 80 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[  +6.287191] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 27 2d 28 d6 46 08 06
	[  +0.000293] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[Oct10 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 8c 22 f6 6b cf 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 29 bf 13 20 f9 08 06
	[ +15.511156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e d6 74 aa 27 d0 08 06
	[  +0.008495] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	[Oct10 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 0b 54 33 52 4e 08 06
	[  +0.000597] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	
	
	==> etcd [781fedd9ff3ef0baad494ef7043642ea3ece3e20053548fce2a07e5cbdde840a] <==
	{"level":"warn","ts":"2025-10-10T18:20:43.882673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:43.891583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:43.903779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:43.912337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:43.920270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:43.928685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:43.937228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:43.945444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:43.953609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:43.961507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:43.971654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:43.978288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:43.986395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:43.994101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:44.003478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:44.011277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:44.020182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:44.028562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:44.040384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:44.048288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:44.055548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49732","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-10T18:21:05.240137Z","caller":"traceutil/trace.go:172","msg":"trace[875128958] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"129.999155ms","start":"2025-10-10T18:21:05.110116Z","end":"2025-10-10T18:21:05.240115Z","steps":["trace[875128958] 'process raft request'  (duration: 127.544285ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:21:05.249194Z","caller":"traceutil/trace.go:172","msg":"trace[2125337026] transaction","detail":"{read_only:false; response_revision:413; number_of_response:1; }","duration":"132.710962ms","start":"2025-10-10T18:21:05.116461Z","end":"2025-10-10T18:21:05.249172Z","steps":["trace[2125337026] 'process raft request'  (duration: 132.565136ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:21:05.249203Z","caller":"traceutil/trace.go:172","msg":"trace[192504869] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"138.437827ms","start":"2025-10-10T18:21:05.110746Z","end":"2025-10-10T18:21:05.249184Z","steps":["trace[192504869] 'process raft request'  (duration: 138.173263ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-10T18:21:05.473897Z","caller":"traceutil/trace.go:172","msg":"trace[1118118212] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"165.888684ms","start":"2025-10-10T18:21:05.307981Z","end":"2025-10-10T18:21:05.473869Z","steps":["trace[1118118212] 'process raft request'  (duration: 135.13649ms)","trace[1118118212] 'compare'  (duration: 30.443035ms)"],"step_count":2}
	
	
	==> kernel <==
	 18:21:16 up  1:03,  0 user,  load average: 6.03, 4.70, 2.93
	Linux default-k8s-diff-port-821769 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [677c7a8fc084f059fd2b3cf2072cafdaf43c6ba51ddc796bd8da7ca85f3e4015] <==
	I1010 18:20:53.696095       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:20:53.696333       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1010 18:20:53.696459       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:20:53.696537       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:20:53.696575       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:20:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:20:53.903544       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:20:53.903587       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:20:53.903606       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:20:54.100423       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 18:20:54.296106       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:20:54.296195       1 metrics.go:72] Registering metrics
	I1010 18:20:54.296278       1 controller.go:711] "Syncing nftables rules"
	I1010 18:21:03.905150       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1010 18:21:03.905258       1 main.go:301] handling current node
	I1010 18:21:13.906166       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1010 18:21:13.906214       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ed6e3b85ebabf936abc6bdb0e2f7c8fe7f8d57436dc1341d830c9976f21cc5ed] <==
	I1010 18:20:44.644402       1 controller.go:667] quota admission added evaluator for: namespaces
	I1010 18:20:44.647331       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:20:44.649287       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1010 18:20:44.653927       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:20:44.654692       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1010 18:20:44.663305       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:20:44.666347       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1010 18:20:45.547293       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1010 18:20:45.551174       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1010 18:20:45.551190       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:20:46.062289       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:20:46.102190       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:20:46.150925       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1010 18:20:46.156939       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1010 18:20:46.157986       1 controller.go:667] quota admission added evaluator for: endpoints
	I1010 18:20:46.162000       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1010 18:20:47.137797       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1010 18:20:47.158865       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1010 18:20:47.168140       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1010 18:20:47.178712       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1010 18:20:52.323177       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:20:52.354665       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:20:52.904711       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1010 18:20:53.040573       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1010 18:21:15.481971       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:60230: use of closed network connection
	
	
	==> kube-controller-manager [96b3e0a6cddf01ca0a8ddb6e35c4558ec7181ba1dd5a2920668e19fff9b93c79] <==
	I1010 18:20:52.235283       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1010 18:20:52.232408       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1010 18:20:52.235598       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1010 18:20:52.236944       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1010 18:20:52.238379       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:20:52.240497       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1010 18:20:52.243217       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1010 18:20:52.244280       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1010 18:20:52.249743       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1010 18:20:52.255165       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1010 18:20:52.255360       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-821769"
	I1010 18:20:52.256259       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1010 18:20:52.262374       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1010 18:20:52.265657       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:20:52.282946       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1010 18:20:52.283089       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 18:20:52.283106       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1010 18:20:52.283114       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1010 18:20:52.283221       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1010 18:20:52.282946       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1010 18:20:52.284104       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1010 18:20:52.285041       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1010 18:20:52.285659       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1010 18:20:52.286939       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1010 18:21:07.259154       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7294c140ca15c50db1575f6dff699f271354e65f6c4bf4ba2c809c2dea69735e] <==
	I1010 18:20:53.533360       1 server_linux.go:53] "Using iptables proxy"
	I1010 18:20:53.602703       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 18:20:53.704516       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 18:20:53.704575       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1010 18:20:53.704673       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:20:53.728711       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:20:53.728843       1 server_linux.go:132] "Using iptables Proxier"
	I1010 18:20:53.739873       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:20:53.740450       1 server.go:527] "Version info" version="v1.34.1"
	I1010 18:20:53.740534       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:20:53.748107       1 config.go:200] "Starting service config controller"
	I1010 18:20:53.748132       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 18:20:53.748399       1 config.go:106] "Starting endpoint slice config controller"
	I1010 18:20:53.748419       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 18:20:53.748424       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 18:20:53.748441       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 18:20:53.748453       1 config.go:309] "Starting node config controller"
	I1010 18:20:53.748572       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 18:20:53.748751       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 18:20:53.849222       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1010 18:20:53.849326       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 18:20:53.849540       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [73fe24d2cf54bea7cd804388f3cd75ffdecc2b187e95a939313fcc00c8887ee0] <==
	E1010 18:20:44.596882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1010 18:20:44.596938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1010 18:20:44.596992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1010 18:20:44.597046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1010 18:20:44.597361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1010 18:20:44.597533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1010 18:20:44.597380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1010 18:20:44.597407       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1010 18:20:44.597444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1010 18:20:44.597493       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1010 18:20:44.597521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1010 18:20:44.597395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1010 18:20:45.420262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1010 18:20:45.462950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1010 18:20:45.544651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1010 18:20:45.671437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1010 18:20:45.679721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1010 18:20:45.709312       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1010 18:20:45.712523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1010 18:20:45.819871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1010 18:20:45.835169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1010 18:20:45.839304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1010 18:20:45.860553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1010 18:20:46.093022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1010 18:20:48.292256       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 10 18:20:48 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:48.060779    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-821769" podStartSLOduration=1.060754731 podStartE2EDuration="1.060754731s" podCreationTimestamp="2025-10-10 18:20:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:20:48.050852673 +0000 UTC m=+1.138011160" watchObservedRunningTime="2025-10-10 18:20:48.060754731 +0000 UTC m=+1.147913218"
	Oct 10 18:20:48 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:48.073831    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-821769" podStartSLOduration=1.073811559 podStartE2EDuration="1.073811559s" podCreationTimestamp="2025-10-10 18:20:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:20:48.062018834 +0000 UTC m=+1.149177319" watchObservedRunningTime="2025-10-10 18:20:48.073811559 +0000 UTC m=+1.160970046"
	Oct 10 18:20:48 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:48.085494    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-821769" podStartSLOduration=1.085474158 podStartE2EDuration="1.085474158s" podCreationTimestamp="2025-10-10 18:20:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:20:48.085117063 +0000 UTC m=+1.172275540" watchObservedRunningTime="2025-10-10 18:20:48.085474158 +0000 UTC m=+1.172632645"
	Oct 10 18:20:48 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:48.085939    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-821769" podStartSLOduration=3.085758249 podStartE2EDuration="3.085758249s" podCreationTimestamp="2025-10-10 18:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:20:48.073801996 +0000 UTC m=+1.160960484" watchObservedRunningTime="2025-10-10 18:20:48.085758249 +0000 UTC m=+1.172916737"
	Oct 10 18:20:52 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:52.286884    1340 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 10 18:20:52 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:52.291416    1340 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 10 18:20:53 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:53.129811    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0598db95-c0fc-49b8-a15b-26e4f96ed49c-lib-modules\") pod \"kube-proxy-h2mzf\" (UID: \"0598db95-c0fc-49b8-a15b-26e4f96ed49c\") " pod="kube-system/kube-proxy-h2mzf"
	Oct 10 18:20:53 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:53.130183    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpt6n\" (UniqueName: \"kubernetes.io/projected/f4b100ab-44a4-49d1-bae7-d7dbdd293a80-kube-api-access-zpt6n\") pod \"kindnet-4w475\" (UID: \"f4b100ab-44a4-49d1-bae7-d7dbdd293a80\") " pod="kube-system/kindnet-4w475"
	Oct 10 18:20:53 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:53.130899    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0598db95-c0fc-49b8-a15b-26e4f96ed49c-kube-proxy\") pod \"kube-proxy-h2mzf\" (UID: \"0598db95-c0fc-49b8-a15b-26e4f96ed49c\") " pod="kube-system/kube-proxy-h2mzf"
	Oct 10 18:20:53 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:53.130952    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdntl\" (UniqueName: \"kubernetes.io/projected/0598db95-c0fc-49b8-a15b-26e4f96ed49c-kube-api-access-hdntl\") pod \"kube-proxy-h2mzf\" (UID: \"0598db95-c0fc-49b8-a15b-26e4f96ed49c\") " pod="kube-system/kube-proxy-h2mzf"
	Oct 10 18:20:53 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:53.130982    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f4b100ab-44a4-49d1-bae7-d7dbdd293a80-cni-cfg\") pod \"kindnet-4w475\" (UID: \"f4b100ab-44a4-49d1-bae7-d7dbdd293a80\") " pod="kube-system/kindnet-4w475"
	Oct 10 18:20:53 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:53.131006    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b100ab-44a4-49d1-bae7-d7dbdd293a80-lib-modules\") pod \"kindnet-4w475\" (UID: \"f4b100ab-44a4-49d1-bae7-d7dbdd293a80\") " pod="kube-system/kindnet-4w475"
	Oct 10 18:20:53 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:53.131031    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0598db95-c0fc-49b8-a15b-26e4f96ed49c-xtables-lock\") pod \"kube-proxy-h2mzf\" (UID: \"0598db95-c0fc-49b8-a15b-26e4f96ed49c\") " pod="kube-system/kube-proxy-h2mzf"
	Oct 10 18:20:53 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:53.131062    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4b100ab-44a4-49d1-bae7-d7dbdd293a80-xtables-lock\") pod \"kindnet-4w475\" (UID: \"f4b100ab-44a4-49d1-bae7-d7dbdd293a80\") " pod="kube-system/kindnet-4w475"
	Oct 10 18:20:54 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:54.067699    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h2mzf" podStartSLOduration=1.067673752 podStartE2EDuration="1.067673752s" podCreationTimestamp="2025-10-10 18:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:20:54.053948913 +0000 UTC m=+7.141107401" watchObservedRunningTime="2025-10-10 18:20:54.067673752 +0000 UTC m=+7.154832239"
	Oct 10 18:20:57 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:20:57.005740    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4w475" podStartSLOduration=4.00571304 podStartE2EDuration="4.00571304s" podCreationTimestamp="2025-10-10 18:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:20:54.068129873 +0000 UTC m=+7.155288361" watchObservedRunningTime="2025-10-10 18:20:57.00571304 +0000 UTC m=+10.092871701"
	Oct 10 18:21:04 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:21:04.213986    1340 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 10 18:21:04 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:21:04.316239    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/63ba31a4-0bea-47b8-92f4-453fa7d83aea-tmp\") pod \"storage-provisioner\" (UID: \"63ba31a4-0bea-47b8-92f4-453fa7d83aea\") " pod="kube-system/storage-provisioner"
	Oct 10 18:21:04 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:21:04.316316    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr5mf\" (UniqueName: \"kubernetes.io/projected/63ba31a4-0bea-47b8-92f4-453fa7d83aea-kube-api-access-zr5mf\") pod \"storage-provisioner\" (UID: \"63ba31a4-0bea-47b8-92f4-453fa7d83aea\") " pod="kube-system/storage-provisioner"
	Oct 10 18:21:04 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:21:04.316359    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a6485d8-d7c2-4cdc-a015-68b7754aa396-config-volume\") pod \"coredns-66bc5c9577-wrz5v\" (UID: \"7a6485d8-d7c2-4cdc-a015-68b7754aa396\") " pod="kube-system/coredns-66bc5c9577-wrz5v"
	Oct 10 18:21:04 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:21:04.316393    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q7vk\" (UniqueName: \"kubernetes.io/projected/7a6485d8-d7c2-4cdc-a015-68b7754aa396-kube-api-access-8q7vk\") pod \"coredns-66bc5c9577-wrz5v\" (UID: \"7a6485d8-d7c2-4cdc-a015-68b7754aa396\") " pod="kube-system/coredns-66bc5c9577-wrz5v"
	Oct 10 18:21:05 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:21:05.251067    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wrz5v" podStartSLOduration=12.251028149 podStartE2EDuration="12.251028149s" podCreationTimestamp="2025-10-10 18:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:21:05.106120637 +0000 UTC m=+18.193279126" watchObservedRunningTime="2025-10-10 18:21:05.251028149 +0000 UTC m=+18.338186653"
	Oct 10 18:21:05 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:21:05.251317    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.251303361 podStartE2EDuration="12.251303361s" podCreationTimestamp="2025-10-10 18:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:21:05.250963382 +0000 UTC m=+18.338121868" watchObservedRunningTime="2025-10-10 18:21:05.251303361 +0000 UTC m=+18.338461841"
	Oct 10 18:21:07 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:21:07.435397    1340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgzpm\" (UniqueName: \"kubernetes.io/projected/2756f9b0-fbc0-4e80-9636-d7ae1972908b-kube-api-access-qgzpm\") pod \"busybox\" (UID: \"2756f9b0-fbc0-4e80-9636-d7ae1972908b\") " pod="default/busybox"
	Oct 10 18:21:10 default-k8s-diff-port-821769 kubelet[1340]: I1010 18:21:10.095902    1340 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.023084913 podStartE2EDuration="3.095877867s" podCreationTimestamp="2025-10-10 18:21:07 +0000 UTC" firstStartedPulling="2025-10-10 18:21:07.716517182 +0000 UTC m=+20.803675660" lastFinishedPulling="2025-10-10 18:21:09.78931014 +0000 UTC m=+22.876468614" observedRunningTime="2025-10-10 18:21:10.095878308 +0000 UTC m=+23.183036797" watchObservedRunningTime="2025-10-10 18:21:10.095877867 +0000 UTC m=+23.183036354"
	
	
	==> storage-provisioner [c3e2933bf4c677840db0c29952930d1271d1f3e52f67be8b102ea1005a5a1b37] <==
	I1010 18:21:04.618460       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 18:21:04.636381       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 18:21:04.649541       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1010 18:21:04.653762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:04.660511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:21:04.660840       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 18:21:04.661045       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"acf2fe9b-472b-4115-89d9-0092fd7e1fc6", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-821769_8a1996f3-b576-40cb-bb48-2c59da9cdaec became leader
	I1010 18:21:04.661208       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-821769_8a1996f3-b576-40cb-bb48-2c59da9cdaec!
	W1010 18:21:04.665568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:04.671388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:21:04.761752       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-821769_8a1996f3-b576-40cb-bb48-2c59da9cdaec!
	W1010 18:21:06.675537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:06.680260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:08.683596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:08.687464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:10.691244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:10.695747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:12.699393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:12.703395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:14.707321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:14.712495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:16.716399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:16.720247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-821769 -n default-k8s-diff-port-821769
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-821769 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.14s)
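
The storage-provisioner log above shows the controller polling its leader-election lock every two seconds, and each poll trips the client-go warning "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice", because the lock is still the v1 Endpoints object kube-system/k8s.io-minikube-hostpath. A minimal Go sketch of the coordination.k8s.io Lease-based lock that client-go offers as the replacement follows; the POD_NAME identity, the lease timings, and the in-cluster config are illustrative assumptions, not values taken from this run:

	package main

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
		"k8s.io/klog/v2"
	)

	func main() {
		// Assumes the controller runs in-cluster, like the provisioner pod above.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			klog.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// A Lease lock in place of the deprecated Endpoints-based lock; the
		// lease name mirrors the Endpoints object seen in the log above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath",
				Namespace: "kube-system",
			},
			Client: client.CoordinationV1(),
			// POD_NAME as the holder identity is an assumption for this sketch.
			LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second, // illustrative timings
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					klog.Info("acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					klog.Info("lost lease; stopping")
				},
			},
		})
	}

With a LeaseLock the election semantics are unchanged, but the controller no longer reads or writes v1 Endpoints on every renewal, so the repeated deprecation warnings disappear from the log.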

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (5.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-141193 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-141193 --alsologtostderr -v=1: exit status 80 (2.095983316s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-141193 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 18:21:22.657655  322397 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:21:22.657786  322397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:22.657798  322397 out.go:374] Setting ErrFile to fd 2...
	I1010 18:21:22.657805  322397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:22.658073  322397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:21:22.658320  322397 out.go:368] Setting JSON to false
	I1010 18:21:22.658363  322397 mustload.go:65] Loading cluster: old-k8s-version-141193
	I1010 18:21:22.658712  322397 config.go:182] Loaded profile config "old-k8s-version-141193": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1010 18:21:22.659148  322397 cli_runner.go:164] Run: docker container inspect old-k8s-version-141193 --format={{.State.Status}}
	I1010 18:21:22.677528  322397 host.go:66] Checking if "old-k8s-version-141193" exists ...
	I1010 18:21:22.677786  322397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:21:22.739178  322397 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:87 OomKillDisable:false NGoroutines:93 SystemTime:2025-10-10 18:21:22.729061552 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:21:22.739788  322397 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-141193 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1010 18:21:22.741652  322397 out.go:179] * Pausing node old-k8s-version-141193 ... 
	I1010 18:21:22.742856  322397 host.go:66] Checking if "old-k8s-version-141193" exists ...
	I1010 18:21:22.743108  322397 ssh_runner.go:195] Run: systemctl --version
	I1010 18:21:22.743144  322397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-141193
	I1010 18:21:22.760115  322397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/old-k8s-version-141193/id_rsa Username:docker}
	I1010 18:21:22.855815  322397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:22.869023  322397 pause.go:52] kubelet running: true
	I1010 18:21:22.869097  322397 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:21:23.051154  322397 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:21:23.051234  322397 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:21:23.120150  322397 cri.go:89] found id: "de2790671165db40044a25122f17100a12947ed8065bf6e2ed1ff37219e247dd"
	I1010 18:21:23.120180  322397 cri.go:89] found id: "76851e857de85c1d61246f777900d9a4581fca45808f5b980f367404d0d69f55"
	I1010 18:21:23.120194  322397 cri.go:89] found id: "17dc2d6edfc14bbc3aad59599c1fe778e3325320e2e82a8580a705cf10bd89fe"
	I1010 18:21:23.120198  322397 cri.go:89] found id: "f0a141878e079b9bef80d8c836ead2aaa0e5e6f6923e15d06ab08325251c3ff9"
	I1010 18:21:23.120201  322397 cri.go:89] found id: "194d18ca204baa8431464117f4490a32c01a38dcdc5e3a8e68285f79bd382765"
	I1010 18:21:23.120204  322397 cri.go:89] found id: "35c22fae38401c52658935667354e9d6d1ec78136964aab98a72bf3ef5eb768f"
	I1010 18:21:23.120207  322397 cri.go:89] found id: "40a7654c69d62a8d95b0f35cd0690ed73e1fdcfe1ca6c15bbfe41a93f8101259"
	I1010 18:21:23.120209  322397 cri.go:89] found id: "fd2510c67a2437bd698c9b5bc34c054b544522802f65bf2ffc6d09e1b707e52f"
	I1010 18:21:23.120211  322397 cri.go:89] found id: "3757d2bd727229dd68d4be360086d9271d28f5c098b84264b16d8e9b1794093f"
	I1010 18:21:23.120217  322397 cri.go:89] found id: "1667847f042344bbbf08942c8b74a2e8385d5dfaf27c738cc310c23092d32a3d"
	I1010 18:21:23.120219  322397 cri.go:89] found id: "7b7c62874a1a37307babd4ba819091e951bc357eb79ac3fa62cffe33dbb78e22"
	I1010 18:21:23.120222  322397 cri.go:89] found id: ""
	I1010 18:21:23.120256  322397 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:21:23.133022  322397 retry.go:31] will retry after 185.256629ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:23Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:21:23.319517  322397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:23.334553  322397 pause.go:52] kubelet running: false
	I1010 18:21:23.334630  322397 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:21:23.486077  322397 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:21:23.486180  322397 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:21:23.553332  322397 cri.go:89] found id: "de2790671165db40044a25122f17100a12947ed8065bf6e2ed1ff37219e247dd"
	I1010 18:21:23.553361  322397 cri.go:89] found id: "76851e857de85c1d61246f777900d9a4581fca45808f5b980f367404d0d69f55"
	I1010 18:21:23.553367  322397 cri.go:89] found id: "17dc2d6edfc14bbc3aad59599c1fe778e3325320e2e82a8580a705cf10bd89fe"
	I1010 18:21:23.553372  322397 cri.go:89] found id: "f0a141878e079b9bef80d8c836ead2aaa0e5e6f6923e15d06ab08325251c3ff9"
	I1010 18:21:23.553376  322397 cri.go:89] found id: "194d18ca204baa8431464117f4490a32c01a38dcdc5e3a8e68285f79bd382765"
	I1010 18:21:23.553381  322397 cri.go:89] found id: "35c22fae38401c52658935667354e9d6d1ec78136964aab98a72bf3ef5eb768f"
	I1010 18:21:23.553385  322397 cri.go:89] found id: "40a7654c69d62a8d95b0f35cd0690ed73e1fdcfe1ca6c15bbfe41a93f8101259"
	I1010 18:21:23.553389  322397 cri.go:89] found id: "fd2510c67a2437bd698c9b5bc34c054b544522802f65bf2ffc6d09e1b707e52f"
	I1010 18:21:23.553394  322397 cri.go:89] found id: "3757d2bd727229dd68d4be360086d9271d28f5c098b84264b16d8e9b1794093f"
	I1010 18:21:23.553402  322397 cri.go:89] found id: "1667847f042344bbbf08942c8b74a2e8385d5dfaf27c738cc310c23092d32a3d"
	I1010 18:21:23.553406  322397 cri.go:89] found id: "7b7c62874a1a37307babd4ba819091e951bc357eb79ac3fa62cffe33dbb78e22"
	I1010 18:21:23.553411  322397 cri.go:89] found id: ""
	I1010 18:21:23.553459  322397 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:21:23.566831  322397 retry.go:31] will retry after 359.460849ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:23Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:21:23.927268  322397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:23.944331  322397 pause.go:52] kubelet running: false
	I1010 18:21:23.944390  322397 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:21:24.084534  322397 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:21:24.084639  322397 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:21:24.152280  322397 cri.go:89] found id: "de2790671165db40044a25122f17100a12947ed8065bf6e2ed1ff37219e247dd"
	I1010 18:21:24.152299  322397 cri.go:89] found id: "76851e857de85c1d61246f777900d9a4581fca45808f5b980f367404d0d69f55"
	I1010 18:21:24.152303  322397 cri.go:89] found id: "17dc2d6edfc14bbc3aad59599c1fe778e3325320e2e82a8580a705cf10bd89fe"
	I1010 18:21:24.152306  322397 cri.go:89] found id: "f0a141878e079b9bef80d8c836ead2aaa0e5e6f6923e15d06ab08325251c3ff9"
	I1010 18:21:24.152309  322397 cri.go:89] found id: "194d18ca204baa8431464117f4490a32c01a38dcdc5e3a8e68285f79bd382765"
	I1010 18:21:24.152313  322397 cri.go:89] found id: "35c22fae38401c52658935667354e9d6d1ec78136964aab98a72bf3ef5eb768f"
	I1010 18:21:24.152315  322397 cri.go:89] found id: "40a7654c69d62a8d95b0f35cd0690ed73e1fdcfe1ca6c15bbfe41a93f8101259"
	I1010 18:21:24.152318  322397 cri.go:89] found id: "fd2510c67a2437bd698c9b5bc34c054b544522802f65bf2ffc6d09e1b707e52f"
	I1010 18:21:24.152321  322397 cri.go:89] found id: "3757d2bd727229dd68d4be360086d9271d28f5c098b84264b16d8e9b1794093f"
	I1010 18:21:24.152334  322397 cri.go:89] found id: "1667847f042344bbbf08942c8b74a2e8385d5dfaf27c738cc310c23092d32a3d"
	I1010 18:21:24.152339  322397 cri.go:89] found id: "7b7c62874a1a37307babd4ba819091e951bc357eb79ac3fa62cffe33dbb78e22"
	I1010 18:21:24.152342  322397 cri.go:89] found id: ""
	I1010 18:21:24.152384  322397 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:21:24.164753  322397 retry.go:31] will retry after 285.575155ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:24Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:21:24.451282  322397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:24.465318  322397 pause.go:52] kubelet running: false
	I1010 18:21:24.465372  322397 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:21:24.616998  322397 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:21:24.617106  322397 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:21:24.683582  322397 cri.go:89] found id: "de2790671165db40044a25122f17100a12947ed8065bf6e2ed1ff37219e247dd"
	I1010 18:21:24.683600  322397 cri.go:89] found id: "76851e857de85c1d61246f777900d9a4581fca45808f5b980f367404d0d69f55"
	I1010 18:21:24.683603  322397 cri.go:89] found id: "17dc2d6edfc14bbc3aad59599c1fe778e3325320e2e82a8580a705cf10bd89fe"
	I1010 18:21:24.683607  322397 cri.go:89] found id: "f0a141878e079b9bef80d8c836ead2aaa0e5e6f6923e15d06ab08325251c3ff9"
	I1010 18:21:24.683610  322397 cri.go:89] found id: "194d18ca204baa8431464117f4490a32c01a38dcdc5e3a8e68285f79bd382765"
	I1010 18:21:24.683613  322397 cri.go:89] found id: "35c22fae38401c52658935667354e9d6d1ec78136964aab98a72bf3ef5eb768f"
	I1010 18:21:24.683616  322397 cri.go:89] found id: "40a7654c69d62a8d95b0f35cd0690ed73e1fdcfe1ca6c15bbfe41a93f8101259"
	I1010 18:21:24.683618  322397 cri.go:89] found id: "fd2510c67a2437bd698c9b5bc34c054b544522802f65bf2ffc6d09e1b707e52f"
	I1010 18:21:24.683621  322397 cri.go:89] found id: "3757d2bd727229dd68d4be360086d9271d28f5c098b84264b16d8e9b1794093f"
	I1010 18:21:24.683626  322397 cri.go:89] found id: "1667847f042344bbbf08942c8b74a2e8385d5dfaf27c738cc310c23092d32a3d"
	I1010 18:21:24.683628  322397 cri.go:89] found id: "7b7c62874a1a37307babd4ba819091e951bc357eb79ac3fa62cffe33dbb78e22"
	I1010 18:21:24.683631  322397 cri.go:89] found id: ""
	I1010 18:21:24.683665  322397 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:21:24.698223  322397 out.go:203] 
	W1010 18:21:24.699488  322397 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 18:21:24.699505  322397 out.go:285] * 
	* 
	W1010 18:21:24.703708  322397 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 18:21:24.704862  322397 out.go:203] 

                                                
                                                
** /stderr **
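
The stderr above also shows how the pause path enumerates containers before freezing them: cri.go composes one "crictl ps -a --quiet" per target namespace and runs them all through a single "sudo -s eval" round-trip. A minimal Go sketch of that command construction (illustrative only; crictlListCmd is an assumed name, not minikube's actual helper):

package main

import (
	"fmt"
	"strings"
)

// crictlListCmd builds the namespace-scoped listing command seen in the
// log: one "crictl ps -a --quiet" per namespace, joined with ";" so a
// single SSH invocation covers every namespace.
func crictlListCmd(namespaces []string) string {
	parts := make([]string, 0, len(namespaces))
	for _, ns := range namespaces {
		parts = append(parts, "crictl ps -a --quiet --label io.kubernetes.pod.namespace="+ns)
	}
	return fmt.Sprintf("sudo -s eval %q", strings.Join(parts, "; "))
}

func main() {
	// Reproduces the command string logged by cri.go above.
	fmt.Println(crictlListCmd([]string{"kube-system", "kubernetes-dashboard", "istio-operator"}))
}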
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-141193 --alsologtostderr -v=1 failed: exit status 80
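
The exit status 80 is the retry loop giving up: every attempt at "sudo runc list -f json" fails with "open /run/runc: no such file or directory" on this crio node, and after a few short randomized delays (185ms, 359ms, 285ms in the log above) pause surfaces GUEST_PAUSE. A rough Go sketch of that retry shape, assuming a generic helper rather than minikube's actual retry.go:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retry runs op up to attempts times, sleeping a short randomized delay
// between failures, and returns the last error if no attempt succeeds.
func retry(attempts int, maxDelay time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		d := time.Duration(rand.Int63n(int64(maxDelay)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	// The operation that keeps failing in the log above.
	err := retry(4, 400*time.Millisecond, func() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc list: %w: %s", err, out)
		}
		return nil
	})
	if err != nil {
		fmt.Println("Exiting due to GUEST_PAUSE:", err)
	}
}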
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-141193
helpers_test.go:243: (dbg) docker inspect old-k8s-version-141193:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c",
	        "Created": "2025-10-10T18:19:07.516278103Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309448,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:20:24.185750468Z",
	            "FinishedAt": "2025-10-10T18:20:23.275982285Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c/hostname",
	        "HostsPath": "/var/lib/docker/containers/00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c/hosts",
	        "LogPath": "/var/lib/docker/containers/00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c/00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c-json.log",
	        "Name": "/old-k8s-version-141193",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-141193:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-141193",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c",
	                "LowerDir": "/var/lib/docker/overlay2/8175d4c388af2a62328900c4de53ca564319b22d0194435beaabfec458b151c4-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8175d4c388af2a62328900c4de53ca564319b22d0194435beaabfec458b151c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8175d4c388af2a62328900c4de53ca564319b22d0194435beaabfec458b151c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8175d4c388af2a62328900c4de53ca564319b22d0194435beaabfec458b151c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-141193",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-141193/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-141193",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-141193",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-141193",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bdd64c654c8a73fceb2bbfc445295573b418d0eff045ad8a213a0d19c8e16534",
	            "SandboxKey": "/var/run/docker/netns/bdd64c654c8a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-141193": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:0f:0f:f5:95:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7dff4078001ce0edf8fdd80b625c94d6d211c5682186b40a040629dae3a3adf3",
	                    "EndpointID": "2345ce8c4c4e3ff80777f98944677a95dffc02178c837aae723fd948bbd999ca",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-141193",
	                        "00949309f427"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
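
For reference, the SSH port the post-mortem relies on (33103 in the Ports block above) is what the earlier cli_runner template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} extracts. An equivalent standalone Go sketch that decodes the inspect JSON directly (illustrative, not the test helper itself):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// binding mirrors one entry of NetworkSettings.Ports in the inspect
// output above, e.g. {"HostIp": "127.0.0.1", "HostPort": "33103"}.
type binding struct {
	HostIp   string
	HostPort string
}

type container struct {
	NetworkSettings struct {
		Ports map[string][]binding
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-141193").Output()
	if err != nil {
		panic(err)
	}
	var cs []container // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	fmt.Println(cs[0].NetworkSettings.Ports["22/tcp"][0].HostPort) // 33103 above
}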
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-141193 -n old-k8s-version-141193
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-141193 -n old-k8s-version-141193: exit status 2 (304.660922ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-141193 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-141193 logs -n 25: (1.108743273s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-078032 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-472518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo containerd config dump                                                                                                                                                                                                  │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo crio config                                                                                                                                                                                                             │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ delete  │ -p bridge-078032                                                                                                                                                                                                                              │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-141193 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p old-k8s-version-141193 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ stop    │ -p embed-certs-472518 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-556024 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ delete  │ -p disable-driver-mounts-523797                                                                                                                                                                                                               │ disable-driver-mounts-523797 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ stop    │ -p no-preload-556024 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-472518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p embed-certs-472518 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-556024 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p no-preload-556024 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-821769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-821769 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ old-k8s-version-141193 image list --format=json                                                                                                                                                                                               │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p old-k8s-version-141193 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:20:43
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:20:43.446366  316039 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:20:43.446643  316039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:20:43.446652  316039 out.go:374] Setting ErrFile to fd 2...
	I1010 18:20:43.446657  316039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:20:43.446905  316039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:20:43.447426  316039 out.go:368] Setting JSON to false
	I1010 18:20:43.448597  316039 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3783,"bootTime":1760116660,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:20:43.448694  316039 start.go:141] virtualization: kvm guest
	I1010 18:20:43.451659  316039 out.go:179] * [no-preload-556024] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:20:43.455280  316039 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:20:43.455310  316039 notify.go:220] Checking for updates...
	I1010 18:20:43.457194  316039 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:20:43.458229  316039 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:43.459338  316039 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:20:43.460374  316039 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:20:43.461326  316039 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:20:43.462916  316039 config.go:182] Loaded profile config "no-preload-556024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:43.463671  316039 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:20:43.494145  316039 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:20:43.494327  316039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:20:43.575548  316039 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-10 18:20:43.559967778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:20:43.575688  316039 docker.go:318] overlay module found
	I1010 18:20:43.578025  316039 out.go:179] * Using the docker driver based on existing profile
	I1010 18:20:43.579242  316039 start.go:305] selected driver: docker
	I1010 18:20:43.579261  316039 start.go:925] validating driver "docker" against &{Name:no-preload-556024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:20:43.579415  316039 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:20:43.580194  316039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:20:43.653363  316039 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-10 18:20:43.64191346 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:20:43.653670  316039 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:20:43.653698  316039 cni.go:84] Creating CNI manager for ""
	I1010 18:20:43.653755  316039 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:20:43.653825  316039 start.go:349] cluster config:
	{Name:no-preload-556024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:20:43.659998  316039 out.go:179] * Starting "no-preload-556024" primary control-plane node in "no-preload-556024" cluster
	I1010 18:20:43.661318  316039 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:20:43.662567  316039 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:20:43.663594  316039 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:20:43.663673  316039 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:20:43.663749  316039 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/config.json ...
	I1010 18:20:43.664143  316039 cache.go:107] acquiring lock: {Name:mkdface014b0b0c18e2529a8fc2cf742979f5f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664226  316039 cache.go:107] acquiring lock: {Name:mkd574c74807a65d6c1e08f0a6d292773ee4d51a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664257  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1010 18:20:43.664286  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1010 18:20:43.664290  316039 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 149.274µs
	I1010 18:20:43.664294  316039 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 71.383µs
	I1010 18:20:43.664309  316039 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1010 18:20:43.664309  316039 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1010 18:20:43.664330  316039 cache.go:107] acquiring lock: {Name:mk6c1abc09453f5583a50c7348563cf680f08172 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664353  316039 cache.go:107] acquiring lock: {Name:mk8a6cf34543e68ad996fdd3dfcc536ed23f13a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664378  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1010 18:20:43.664386  316039 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 58.29µs
	I1010 18:20:43.664398  316039 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1010 18:20:43.664401  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1010 18:20:43.664414  316039 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 62.339µs
	I1010 18:20:43.664412  316039 cache.go:107] acquiring lock: {Name:mk589006dd1715c9cef02bfeb051e2a5fdd82d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664423  316039 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1010 18:20:43.664435  316039 cache.go:107] acquiring lock: {Name:mk346c7b9277054f446ecd193d09cac2f17a13f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664474  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1010 18:20:43.664330  316039 cache.go:107] acquiring lock: {Name:mk43600d297347b2bd1ef8f04fef87e9e24d614a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664560  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1010 18:20:43.664579  316039 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 251.287µs
	I1010 18:20:43.664587  316039 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1010 18:20:43.664447  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1010 18:20:43.664606  316039 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 195.132µs
	I1010 18:20:43.664619  316039 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1010 18:20:43.664143  316039 cache.go:107] acquiring lock: {Name:mk4f454812d4444d82ff12e1c427c98a877e5e2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664653  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1010 18:20:43.664663  316039 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 550.696µs
	I1010 18:20:43.664673  316039 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1010 18:20:43.664483  316039 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 49.234µs
	I1010 18:20:43.664681  316039 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1010 18:20:43.664688  316039 cache.go:87] Successfully saved all images to host disk.
	I1010 18:20:43.689240  316039 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:20:43.689261  316039 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:20:43.689283  316039 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:20:43.689321  316039 start.go:360] acquireMachinesLock for no-preload-556024: {Name:mk3ff552b11677088d4385d2ba43c142109fcf3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.689401  316039 start.go:364] duration metric: took 59.53µs to acquireMachinesLock for "no-preload-556024"
	I1010 18:20:43.689425  316039 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:20:43.689435  316039 fix.go:54] fixHost starting: 
	I1010 18:20:43.689696  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:43.716175  316039 fix.go:112] recreateIfNeeded on no-preload-556024: state=Stopped err=<nil>
	W1010 18:20:43.716210  316039 fix.go:138] unexpected machine state, will restart: <nil>
	W1010 18:20:39.530761  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:41.532918  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:43.534100  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	I1010 18:20:41.340446  310776 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 18:20:41.340555  310776 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 18:20:42.841252  310776 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500922644s
	I1010 18:20:42.844237  310776 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1010 18:20:42.844348  310776 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1010 18:20:42.844433  310776 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1010 18:20:42.844518  310776 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1010 18:20:44.598226  310776 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.753880491s
	I1010 18:20:45.121438  310776 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.277113545s
	I1010 18:20:46.346293  310776 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.502033618s
	I1010 18:20:46.357281  310776 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 18:20:46.366479  310776 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 18:20:46.375532  310776 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 18:20:46.375817  310776 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-821769 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 18:20:46.384299  310776 kubeadm.go:318] [bootstrap-token] Using token: gwvnud.yj4fhfjb9apke821
	I1010 18:20:42.077576  315243 out.go:252] * Restarting existing docker container for "embed-certs-472518" ...
	I1010 18:20:42.077652  315243 cli_runner.go:164] Run: docker start embed-certs-472518
	I1010 18:20:42.324899  315243 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:20:42.344432  315243 kic.go:430] container "embed-certs-472518" state is running.
	I1010 18:20:42.344870  315243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-472518
	I1010 18:20:42.364868  315243 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/config.json ...
	I1010 18:20:42.365194  315243 machine.go:93] provisionDockerMachine start ...
	I1010 18:20:42.365274  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:42.384498  315243 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:42.384729  315243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1010 18:20:42.384743  315243 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:20:42.385417  315243 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54750->127.0.0.1:33113: read: connection reset by peer
	I1010 18:20:45.520224  315243 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-472518
	
	I1010 18:20:45.520254  315243 ubuntu.go:182] provisioning hostname "embed-certs-472518"
	I1010 18:20:45.520313  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:45.539008  315243 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:45.539308  315243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1010 18:20:45.539325  315243 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-472518 && echo "embed-certs-472518" | sudo tee /etc/hostname
	I1010 18:20:45.697980  315243 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-472518
	
	I1010 18:20:45.698066  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:45.719981  315243 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:45.720234  315243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1010 18:20:45.720267  315243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-472518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-472518/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-472518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:20:45.864595  315243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
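	Note: the script above is minikube's standard way of pinning the container hostname to the 127.0.1.1 alias in /etc/hosts. A quick way to verify the result inside the guest (hostname taken from this log; these verification commands are an illustrative sketch, not part of the test run):
	
		getent hosts embed-certs-472518   # should resolve via the 127.0.1.1 entry
		grep '^127.0.1.1' /etc/hosts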
	I1010 18:20:45.864632  315243 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:20:45.864669  315243 ubuntu.go:190] setting up certificates
	I1010 18:20:45.864681  315243 provision.go:84] configureAuth start
	I1010 18:20:45.864752  315243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-472518
	I1010 18:20:45.886254  315243 provision.go:143] copyHostCerts
	I1010 18:20:45.886322  315243 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:20:45.886336  315243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:20:45.886413  315243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:20:45.886551  315243 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:20:45.886565  315243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:20:45.886615  315243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:20:45.886698  315243 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:20:45.886709  315243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:20:45.886745  315243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:20:45.886812  315243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.embed-certs-472518 san=[127.0.0.1 192.168.94.2 embed-certs-472518 localhost minikube]
	I1010 18:20:46.271763  315243 provision.go:177] copyRemoteCerts
	I1010 18:20:46.271823  315243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:20:46.271855  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:46.291521  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:46.392626  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:20:46.415271  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 18:20:46.434707  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:20:46.454219  315243 provision.go:87] duration metric: took 589.52001ms to configureAuth
	I1010 18:20:46.454244  315243 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:20:46.454427  315243 config.go:182] Loaded profile config "embed-certs-472518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:46.454546  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:46.473500  315243 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:46.473704  315243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1010 18:20:46.473721  315243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:20:46.789031  315243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
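	Note: provisioning writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube so cri-o treats the in-cluster service CIDR (10.96.0.0/12) as an insecure registry range, then restarts the service. A sketch for confirming the drop-in took effect (assumes the kicbase crio unit sources the sysconfig file via EnvironmentFile, which is not shown in this log):
	
		cat /etc/sysconfig/crio.minikube
		systemctl cat crio | grep -i EnvironmentFile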
	I1010 18:20:46.789116  315243 machine.go:96] duration metric: took 4.423902548s to provisionDockerMachine
	I1010 18:20:46.789130  315243 start.go:293] postStartSetup for "embed-certs-472518" (driver="docker")
	I1010 18:20:46.789143  315243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:20:46.789210  315243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:20:46.789258  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:46.815152  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:46.385437  310776 out.go:252]   - Configuring RBAC rules ...
	I1010 18:20:46.385588  310776 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 18:20:46.389824  310776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 18:20:46.394691  310776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 18:20:46.397355  310776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 18:20:46.399852  310776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 18:20:46.402418  310776 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 18:20:46.752330  310776 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 18:20:47.169598  310776 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1010 18:20:47.752782  310776 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1010 18:20:47.754001  310776 kubeadm.go:318] 
	I1010 18:20:47.754109  310776 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1010 18:20:47.754123  310776 kubeadm.go:318] 
	I1010 18:20:47.754232  310776 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1010 18:20:47.754244  310776 kubeadm.go:318] 
	I1010 18:20:47.754289  310776 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1010 18:20:47.754398  310776 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 18:20:47.754483  310776 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 18:20:47.754492  310776 kubeadm.go:318] 
	I1010 18:20:47.754572  310776 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1010 18:20:47.754589  310776 kubeadm.go:318] 
	I1010 18:20:47.754658  310776 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 18:20:47.754668  310776 kubeadm.go:318] 
	I1010 18:20:47.754745  310776 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1010 18:20:47.754863  310776 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 18:20:47.754965  310776 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 18:20:47.755000  310776 kubeadm.go:318] 
	I1010 18:20:47.755138  310776 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 18:20:47.755249  310776 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1010 18:20:47.755261  310776 kubeadm.go:318] 
	I1010 18:20:47.755379  310776 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token gwvnud.yj4fhfjb9apke821 \
	I1010 18:20:47.755581  310776 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f \
	I1010 18:20:47.755622  310776 kubeadm.go:318] 	--control-plane 
	I1010 18:20:47.755633  310776 kubeadm.go:318] 
	I1010 18:20:47.755764  310776 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1010 18:20:47.755779  310776 kubeadm.go:318] 
	I1010 18:20:47.755902  310776 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token gwvnud.yj4fhfjb9apke821 \
	I1010 18:20:47.756083  310776 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f 
	I1010 18:20:47.759459  310776 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1010 18:20:47.759612  310776 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
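	Note: the join token and --discovery-token-ca-cert-hash above are specific to this run. If the token expires, the documented kubeadm workflow regenerates both; the hash is the SHA-256 of the cluster CA's public key:
	
		# recompute the discovery hash from the control plane's CA
		openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'
		# or mint a fresh token together with a ready-made join command
		kubeadm token create --print-join-command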
	I1010 18:20:47.759649  310776 cni.go:84] Creating CNI manager for ""
	I1010 18:20:47.759660  310776 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:20:47.761460  310776 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1010 18:20:46.914251  315243 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:20:46.918720  315243 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:20:46.918754  315243 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:20:46.918767  315243 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:20:46.918823  315243 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:20:46.918934  315243 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:20:46.919076  315243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:20:46.928469  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:20:46.951615  315243 start.go:296] duration metric: took 162.458821ms for postStartSetup
	I1010 18:20:46.951700  315243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:20:46.951744  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:46.972432  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:47.076364  315243 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:20:47.081264  315243 fix.go:56] duration metric: took 5.026311661s for fixHost
	I1010 18:20:47.081299  315243 start.go:83] releasing machines lock for "embed-certs-472518", held for 5.026378467s
	I1010 18:20:47.081380  315243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-472518
	I1010 18:20:47.100059  315243 ssh_runner.go:195] Run: cat /version.json
	I1010 18:20:47.100111  315243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:20:47.100122  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:47.100174  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:47.122805  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:47.124141  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:47.227891  315243 ssh_runner.go:195] Run: systemctl --version
	I1010 18:20:47.299889  315243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:20:47.336545  315243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:20:47.341187  315243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:20:47.341242  315243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:20:47.350350  315243 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1010 18:20:47.350370  315243 start.go:495] detecting cgroup driver to use...
	I1010 18:20:47.350396  315243 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:20:47.350445  315243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:20:47.365413  315243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:20:47.379380  315243 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:20:47.379437  315243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:20:47.395098  315243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:20:47.409632  315243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:20:47.495438  315243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:20:47.584238  315243 docker.go:234] disabling docker service ...
	I1010 18:20:47.584305  315243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:20:47.600224  315243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:20:47.614516  315243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:20:47.704010  315243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:20:47.792697  315243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:20:47.808011  315243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:20:47.826927  315243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:20:47.826983  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.837633  315243 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:20:47.837698  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.848119  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.859624  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.870939  315243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:20:47.882141  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.894494  315243 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.906184  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.916671  315243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:20:47.924923  315243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:20:47.934175  315243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:48.032532  315243 ssh_runner.go:195] Run: sudo systemctl restart crio
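	Note: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is set to registry.k8s.io/pause:3.10.1, cgroup_manager to "systemd", a conmon_cgroup = "pod" line is inserted after it, and "net.ipv4.ip_unprivileged_port_start=0" is added to default_sysctls. After the restart, the values can be spot-checked (a sketch; `crio config`, run later in this log, should print the merged configuration):
	
		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
		  /etc/crio/crio.conf.d/02-crio.conf
		sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager'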
	I1010 18:20:48.213272  315243 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:20:48.213343  315243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:20:48.217822  315243 start.go:563] Will wait 60s for crictl version
	I1010 18:20:48.217887  315243 ssh_runner.go:195] Run: which crictl
	I1010 18:20:48.221636  315243 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:20:48.247933  315243 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
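	Note: crictl picks up the runtime endpoint from the /etc/crictl.yaml written a few steps earlier; the same endpoint can also be passed explicitly, which is useful when that file is missing (illustrative sketch):
	
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
		sudo crictl info    # runtime status and conditions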
	I1010 18:20:48.248044  315243 ssh_runner.go:195] Run: crio --version
	I1010 18:20:48.280438  315243 ssh_runner.go:195] Run: crio --version
	I1010 18:20:48.313602  315243 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:20:43.718100  316039 out.go:252] * Restarting existing docker container for "no-preload-556024" ...
	I1010 18:20:43.718195  316039 cli_runner.go:164] Run: docker start no-preload-556024
	I1010 18:20:44.003543  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:44.025442  316039 kic.go:430] container "no-preload-556024" state is running.
	I1010 18:20:44.025897  316039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-556024
	I1010 18:20:44.048338  316039 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/config.json ...
	I1010 18:20:44.048652  316039 machine.go:93] provisionDockerMachine start ...
	I1010 18:20:44.048722  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:44.071078  316039 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:44.071356  316039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1010 18:20:44.071373  316039 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:20:44.071958  316039 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50844->127.0.0.1:33118: read: connection reset by peer
	I1010 18:20:47.219988  316039 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-556024
	
	I1010 18:20:47.220017  316039 ubuntu.go:182] provisioning hostname "no-preload-556024"
	I1010 18:20:47.220124  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:47.240083  316039 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:47.240315  316039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1010 18:20:47.240331  316039 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-556024 && echo "no-preload-556024" | sudo tee /etc/hostname
	I1010 18:20:47.384842  316039 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-556024
	
	I1010 18:20:47.384916  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:47.403676  316039 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:47.403883  316039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1010 18:20:47.403900  316039 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-556024' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-556024/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-556024' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:20:47.541827  316039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:20:47.541854  316039 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:20:47.541874  316039 ubuntu.go:190] setting up certificates
	I1010 18:20:47.541882  316039 provision.go:84] configureAuth start
	I1010 18:20:47.541927  316039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-556024
	I1010 18:20:47.561676  316039 provision.go:143] copyHostCerts
	I1010 18:20:47.561736  316039 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:20:47.561750  316039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:20:47.561822  316039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:20:47.561945  316039 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:20:47.561957  316039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:20:47.561985  316039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:20:47.562088  316039 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:20:47.562100  316039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:20:47.562130  316039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:20:47.562203  316039 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.no-preload-556024 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-556024]
	I1010 18:20:47.678388  316039 provision.go:177] copyRemoteCerts
	I1010 18:20:47.678453  316039 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:20:47.678494  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:47.696871  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:47.801868  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:20:47.826483  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 18:20:47.849207  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 18:20:47.872413  316039 provision.go:87] duration metric: took 330.51941ms to configureAuth
	I1010 18:20:47.872443  316039 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:20:47.872620  316039 config.go:182] Loaded profile config "no-preload-556024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:47.872755  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:47.895966  316039 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:47.896218  316039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1010 18:20:47.896242  316039 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:20:48.278422  316039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:20:48.278452  316039 machine.go:96] duration metric: took 4.229784935s to provisionDockerMachine
	I1010 18:20:48.278468  316039 start.go:293] postStartSetup for "no-preload-556024" (driver="docker")
	I1010 18:20:48.278483  316039 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:20:48.278552  316039 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:20:48.278614  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:48.299387  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:48.409396  316039 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:20:48.413415  316039 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:20:48.413447  316039 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:20:48.413459  316039 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:20:48.413503  316039 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:20:48.413586  316039 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:20:48.413677  316039 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:20:48.423085  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:20:48.445117  316039 start.go:296] duration metric: took 166.633308ms for postStartSetup
	I1010 18:20:48.445191  316039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:20:48.445225  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	W1010 18:20:46.032712  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:48.033716  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	I1010 18:20:48.317208  315243 cli_runner.go:164] Run: docker network inspect embed-certs-472518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:20:48.336738  315243 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1010 18:20:48.344444  315243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:20:48.359751  315243 kubeadm.go:883] updating cluster {Name:embed-certs-472518 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:20:48.359866  315243 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:20:48.359903  315243 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:20:48.394787  315243 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:20:48.394808  315243 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:20:48.394850  315243 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:20:48.422591  315243 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:20:48.422611  315243 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:20:48.422618  315243 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1010 18:20:48.422707  315243 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-472518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
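	Note: the ExecStart override above lands in a systemd drop-in (scp'd below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf); the empty ExecStart= line clears any inherited command before setting minikube's flags. To see what systemd actually merges (sketch):
	
		systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
		systemctl show kubelet -p ExecStart   # the command line actually in effect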
	I1010 18:20:48.422772  315243 ssh_runner.go:195] Run: crio config
	I1010 18:20:48.471617  315243 cni.go:84] Creating CNI manager for ""
	I1010 18:20:48.471643  315243 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:20:48.471662  315243 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 18:20:48.471692  315243 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-472518 NodeName:embed-certs-472518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:20:48.471834  315243 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-472518"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
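	Note: the rendered kubeadm config above is copied to the node as /var/tmp/minikube/kubeadm.yaml.new in the next step. When debugging a config like this by hand, kubeadm's standard dry-run mode exercises it without changing the node (an illustrative sketch, not part of this run):
	
		sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run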
	I1010 18:20:48.471900  315243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:20:48.482685  315243 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:20:48.482762  315243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:20:48.492297  315243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1010 18:20:48.507309  315243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:20:48.521884  315243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1010 18:20:48.537302  315243 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:20:48.541606  315243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
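	Note: these /etc/hosts rewrites go through a temp file and `sudo cp` rather than sed -i or mv because inside a Docker container /etc/hosts is a bind mount: it cannot be replaced by rename, but it can be overwritten in place. The same pattern, standalone (values from this log):
	
		{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; \
		  echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/hosts.new
		sudo cp /tmp/hosts.new /etc/hosts   # cp truncates and writes in place; mv would fail on the bind mount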
	I1010 18:20:48.552248  315243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:48.648834  315243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:20:48.671702  315243 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518 for IP: 192.168.94.2
	I1010 18:20:48.671724  315243 certs.go:195] generating shared ca certs ...
	I1010 18:20:48.671744  315243 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:48.671901  315243 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:20:48.671949  315243 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:20:48.671960  315243 certs.go:257] generating profile certs ...
	I1010 18:20:48.672048  315243 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/client.key
	I1010 18:20:48.672135  315243 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key.37abe28c
	I1010 18:20:48.672172  315243 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.key
	I1010 18:20:48.672285  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:20:48.672313  315243 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:20:48.672320  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:20:48.672346  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:20:48.672365  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:20:48.672386  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:20:48.672421  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:20:48.673064  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:20:48.697896  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:20:48.721920  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:20:48.746177  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:20:48.773805  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1010 18:20:48.797763  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 18:20:48.821956  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:20:48.845335  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 18:20:48.866318  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:20:48.890302  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:20:48.910153  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:20:48.932176  315243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:20:48.953102  315243 ssh_runner.go:195] Run: openssl version
	I1010 18:20:48.961833  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:20:48.974420  315243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:20:48.979097  315243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:20:48.979165  315243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:20:49.017904  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:20:49.028691  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:20:49.045017  315243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:20:49.049108  315243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:20:49.049166  315243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:20:49.085808  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:20:49.095911  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:20:49.105985  315243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:49.110274  315243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:49.110329  315243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:49.150752  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
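
The `ln -fs` runs above are the OpenSSL trust-store convention at work: the CA PEMs were first copied into /usr/share/ca-certificates, and each then gets a /etc/ssl/certs/<subject-hash>.0 symlink (51391683.0, 3ec20f2e.0, b5213941.0 here), because OpenSSL looks up trust anchors by the hash that `openssl x509 -hash -noout` prints. A minimal Go sketch of the same step (not minikube's own code; needs root):

    // Sketch: ask openssl for the subject hash of a CA PEM and link
    // /etc/ssl/certs/<hash>.0 at it, which is how OpenSSL finds trust roots.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkBySubjectHash(caPEM string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPEM).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", caPEM, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // mirror `ln -fs`: replace any stale link
        return os.Symlink(caPEM, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
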
	I1010 18:20:49.164858  315243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:20:49.169330  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:20:49.221633  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:20:49.280769  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:20:49.360389  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:20:49.408955  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:20:49.448148  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
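
The `-checkend 86400` probes above ask whether each control-plane certificate expires within the next 24 hours (86400 seconds); a non-zero exit would force regeneration before kubeadm runs. The same check in plain Go with crypto/x509, as a sketch (the path is one of the certs checked above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // inside d -- the question -checkend answers for d = 24h (86400s).
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        expiring, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", expiring)
    }
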
	I1010 18:20:49.488852  315243 kubeadm.go:400] StartCluster: {Name:embed-certs-472518 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:20:49.488956  315243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:20:49.489020  315243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:20:49.528775  315243 cri.go:89] found id: "159136e63b21ef09e85b6efdc6b5a0f5be67f5af9a3516c5f8cae7be0af60846"
	I1010 18:20:49.528796  315243 cri.go:89] found id: "3622c66fa378c4b8614e23f6545ac6151fa6ef096364723cbdd5d22677bc0ca9"
	I1010 18:20:49.528802  315243 cri.go:89] found id: "a5c1be1847d40640048f86d96a7f93b4166d1688a8afd40971231c2b59f73202"
	I1010 18:20:49.528807  315243 cri.go:89] found id: "a52804abc0e7184b8ec037e1a9594b3794f50868b2f90978e95ba4f3dac34818"
	I1010 18:20:49.528811  315243 cri.go:89] found id: ""
	I1010 18:20:49.528852  315243 ssh_runner.go:195] Run: sudo runc list -f json
	W1010 18:20:49.546231  315243 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:20:49Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:20:49.546375  315243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:20:49.558092  315243 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1010 18:20:49.558114  315243 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1010 18:20:49.558164  315243 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 18:20:49.575197  315243 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:20:49.575886  315243 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-472518" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:49.576504  315243 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-5815/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-472518" cluster setting kubeconfig missing "embed-certs-472518" context setting]
	I1010 18:20:49.577193  315243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:49.578945  315243 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 18:20:49.590650  315243 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1010 18:20:49.590685  315243 kubeadm.go:601] duration metric: took 32.565143ms to restartPrimaryControlPlane
	I1010 18:20:49.590695  315243 kubeadm.go:402] duration metric: took 101.853492ms to StartCluster
	I1010 18:20:49.590713  315243 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:49.590778  315243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:49.592554  315243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
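
The repair path above is straightforward: the profile's cluster and context entries were missing from the shared kubeconfig, so minikube rewrites the file under a file lock. A rough client-go sketch of that load/patch/write cycle (assumes k8s.io/client-go is on the module path; the lock and credential fields are omitted here):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func repair(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Clusters[name]; !ok { // missing cluster setting
            cfg.Clusters[name] = &clientcmdapi.Cluster{Server: server}
        }
        if _, ok := cfg.Contexts[name]; !ok { // missing context setting
            cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
        }
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        err := repair("/home/jenkins/minikube-integration/21724-5815/kubeconfig",
            "embed-certs-472518", "https://192.168.94.2:8443")
        fmt.Println("repair result:", err)
    }
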
	I1010 18:20:49.592830  315243 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:20:49.592901  315243 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:20:49.593019  315243 config.go:182] Loaded profile config "embed-certs-472518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:49.593025  315243 addons.go:69] Setting dashboard=true in profile "embed-certs-472518"
	I1010 18:20:49.593043  315243 addons.go:69] Setting default-storageclass=true in profile "embed-certs-472518"
	I1010 18:20:49.593086  315243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-472518"
	I1010 18:20:49.593067  315243 addons.go:238] Setting addon dashboard=true in "embed-certs-472518"
	W1010 18:20:49.593186  315243 addons.go:247] addon dashboard should already be in state true
	I1010 18:20:49.593234  315243 host.go:66] Checking if "embed-certs-472518" exists ...
	I1010 18:20:49.593029  315243 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-472518"
	I1010 18:20:49.593289  315243 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-472518"
	W1010 18:20:49.593302  315243 addons.go:247] addon storage-provisioner should already be in state true
	I1010 18:20:49.593335  315243 host.go:66] Checking if "embed-certs-472518" exists ...
	I1010 18:20:49.593410  315243 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:20:49.593740  315243 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:20:49.593886  315243 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:20:49.595259  315243 out.go:179] * Verifying Kubernetes components...
	I1010 18:20:49.596615  315243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:49.621223  315243 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1010 18:20:49.621687  315243 addons.go:238] Setting addon default-storageclass=true in "embed-certs-472518"
	W1010 18:20:49.621713  315243 addons.go:247] addon default-storageclass should already be in state true
	I1010 18:20:49.621741  315243 host.go:66] Checking if "embed-certs-472518" exists ...
	I1010 18:20:49.622223  315243 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:20:49.623807  315243 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:20:49.624706  315243 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1010 18:20:48.463897  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:48.560880  316039 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:20:48.565518  316039 fix.go:56] duration metric: took 4.87607827s for fixHost
	I1010 18:20:48.565545  316039 start.go:83] releasing machines lock for "no-preload-556024", held for 4.876130567s
	I1010 18:20:48.565605  316039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-556024
	I1010 18:20:48.590212  316039 ssh_runner.go:195] Run: cat /version.json
	I1010 18:20:48.590274  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:48.590309  316039 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:20:48.590374  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:48.611239  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:48.611223  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:48.707641  316039 ssh_runner.go:195] Run: systemctl --version
	I1010 18:20:48.779239  316039 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:20:48.822991  316039 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:20:48.827985  316039 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:20:48.828127  316039 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:20:48.838254  316039 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
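
Before configuring CRI-O, any pre-existing bridge/podman CNI configs are renamed with a .mk_disabled suffix so they cannot conflict with the CNI minikube installs (kindnet here). A stdlib-only Go sketch of that rename pass:

    // Sketch: move bridge/podman CNI configs aside so only the CNI
    // minikube manages stays active in /etc/cni/net.d.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func disableBridgeCNIs(dir string) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return err
                }
                fmt.Println("disabled", src)
            }
        }
        return nil
    }

    func main() {
        if err := disableBridgeCNIs("/etc/cni/net.d"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
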
	I1010 18:20:48.838278  316039 start.go:495] detecting cgroup driver to use...
	I1010 18:20:48.838310  316039 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:20:48.838375  316039 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:20:48.855699  316039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
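
The detected "systemd" cgroup driver line above feeds two later steps: the cgroup_manager rewrite in the CRI-O drop-in and the cgroupDriver: systemd setting in the kubelet config further below. One common detection heuristic, sketched here as an assumption rather than detect.go's exact logic, is to treat a unified cgroup v2 hierarchy as implying the systemd driver:

    // Heuristic sketch only (an assumption, not minikube's exact logic):
    // a unified cgroup v2 hierarchy implies the systemd cgroup driver.
    package main

    import (
        "fmt"
        "os"
    )

    func cgroupDriver() string {
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            return "systemd" // cgroup v2 unified hierarchy present
        }
        return "cgroupfs"
    }

    func main() { fmt.Println("detected", cgroupDriver(), "cgroup driver") }
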
	I1010 18:20:48.870095  316039 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:20:48.870150  316039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:20:48.889387  316039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:20:48.903428  316039 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:20:49.004846  316039 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:20:49.095121  316039 docker.go:234] disabling docker service ...
	I1010 18:20:49.095195  316039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:20:49.111399  316039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:20:49.124564  316039 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:20:49.233199  316039 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:20:49.371321  316039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:20:49.391416  316039 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:20:49.410665  316039 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:20:49.410726  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.422109  316039 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:20:49.422187  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.434507  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.445435  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.456792  316039 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:20:49.467113  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.478960  316039 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.491083  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.504692  316039 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:20:49.516727  316039 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:20:49.528623  316039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:49.657664  316039 ssh_runner.go:195] Run: sudo systemctl restart crio
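
Reconstructed from the sed commands above (not captured from the node), the /etc/crio/crio.conf.d/02-crio.conf drop-in should now contain roughly the following settings, with the surrounding TOML table headers omitted; the unprivileged-port sysctl is what lets pods bind ports below 1024:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
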
	I1010 18:20:49.845402  316039 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:20:49.845485  316039 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:20:49.849613  316039 start.go:563] Will wait 60s for crictl version
	I1010 18:20:49.849677  316039 ssh_runner.go:195] Run: which crictl
	I1010 18:20:49.853537  316039 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:20:49.887342  316039 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:20:49.887433  316039 ssh_runner.go:195] Run: crio --version
	I1010 18:20:49.930383  316039 ssh_runner.go:195] Run: crio --version
	I1010 18:20:49.976214  316039 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
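
After `systemctl restart crio`, the flow above waits up to 60s for /var/run/crio/crio.sock to appear and then up to 60s more for a crictl version response. A small Go sketch of the socket wait (the 250ms polling interval is this sketch's choice):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the CRI socket exists, up to a deadline.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("%s did not appear within %s", path, timeout)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }
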
	I1010 18:20:47.762395  310776 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1010 18:20:47.766851  310776 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1010 18:20:47.766871  310776 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1010 18:20:47.783354  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1010 18:20:48.028048  310776 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 18:20:48.028155  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:48.028511  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-821769 minikube.k8s.io/updated_at=2025_10_10T18_20_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46 minikube.k8s.io/name=default-k8s-diff-port-821769 minikube.k8s.io/primary=true
	I1010 18:20:48.041226  310776 ops.go:34] apiserver oom_adj: -16
	I1010 18:20:48.128327  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:48.629265  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:49.129256  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:49.631157  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:50.128594  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:50.629277  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:51.129277  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
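
The repeated `kubectl get sa default` runs above are a readiness poll: kubeadm creates the default ServiceAccount asynchronously, so minikube retries roughly every 500ms until it exists (the log later reports this elevateKubeSystemPrivileges wait took about 4.7s). Sketched in Go, matching the logged invocation (the timeout value is this sketch's assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
                "--kubeconfig="+kubeconfig)
            if cmd.Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not ready after %s", timeout)
    }

    func main() {
        err := waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl",
            "/var/lib/minikube/kubeconfig", 2*time.Minute)
        fmt.Println(err)
    }
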
	I1010 18:20:49.977548  316039 cli_runner.go:164] Run: docker network inspect no-preload-556024 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:20:50.001823  316039 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1010 18:20:50.006080  316039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:20:50.017957  316039 kubeadm.go:883] updating cluster {Name:no-preload-556024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:20:50.018111  316039 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:20:50.018151  316039 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:20:50.065609  316039 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:20:50.065631  316039 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:20:50.065639  316039 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1010 18:20:50.065740  316039 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-556024 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:20:50.065812  316039 ssh_runner.go:195] Run: crio config
	I1010 18:20:50.129406  316039 cni.go:84] Creating CNI manager for ""
	I1010 18:20:50.129498  316039 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:20:50.129530  316039 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 18:20:50.129567  316039 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-556024 NodeName:no-preload-556024 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:20:50.129730  316039 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-556024"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:20:50.129812  316039 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:20:50.142159  316039 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:20:50.142246  316039 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:20:50.152351  316039 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1010 18:20:50.168174  316039 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:20:50.184704  316039 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
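
The kubeadm.yaml.new just copied up is the four-document stream dumped above: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one file. Note the CI-oriented choices it carries, such as leader election disabled and disk-pressure eviction effectively turned off (evictionHard thresholds at 0%, imageGCHighThresholdPercent at 100). A stdlib Go sketch that lists the document kinds in such a file:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // kubeadm configs are multi-document YAML separated by "---" lines.
        for i, doc := range strings.Split(string(raw), "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                t := strings.TrimSpace(line)
                if strings.HasPrefix(t, "kind: ") {
                    fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(t, "kind: "))
                }
            }
        }
    }
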
	I1010 18:20:50.201688  316039 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:20:50.205576  316039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:20:50.216719  316039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:50.314580  316039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:20:50.339172  316039 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024 for IP: 192.168.76.2
	I1010 18:20:50.339196  316039 certs.go:195] generating shared ca certs ...
	I1010 18:20:50.339214  316039 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:50.339389  316039 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:20:50.339439  316039 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:20:50.339454  316039 certs.go:257] generating profile certs ...
	I1010 18:20:50.339572  316039 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/client.key
	I1010 18:20:50.339656  316039 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key.b1bc56db
	I1010 18:20:50.339729  316039 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.key
	I1010 18:20:50.339901  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:20:50.339937  316039 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:20:50.339947  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:20:50.339978  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:20:50.340018  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:20:50.340047  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:20:50.340152  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:20:50.341083  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:20:50.369071  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:20:50.396382  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:20:50.426223  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:20:50.462107  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:20:50.492175  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:20:50.515308  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:20:50.542463  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 18:20:50.567288  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:20:50.593916  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:20:50.623441  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:20:50.661822  316039 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:20:50.685220  316039 ssh_runner.go:195] Run: openssl version
	I1010 18:20:50.694018  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:20:50.707964  316039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:20:50.714772  316039 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:20:50.714863  316039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:20:50.775759  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:20:50.789361  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:20:50.802813  316039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:20:50.807903  316039 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:20:50.807966  316039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:20:50.865904  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:20:50.883902  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:20:50.901914  316039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:50.908945  316039 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:50.909005  316039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:50.970081  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:20:50.984254  316039 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:20:50.990832  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:20:51.055643  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:20:51.124755  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:20:51.195467  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:20:51.257855  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:20:51.321018  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1010 18:20:51.374149  316039 kubeadm.go:400] StartCluster: {Name:no-preload-556024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:20:51.374313  316039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:20:51.374389  316039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:20:51.434445  316039 cri.go:89] found id: "624948aa983f6a950a5a86e99ebbf4e3cec99b2849460ed697524b3fc4ffac05"
	I1010 18:20:51.434471  316039 cri.go:89] found id: "63abfddfe6fe2887c4901b8e265aae05ec3330bd42bd0d67e011b354a39c6023"
	I1010 18:20:51.434477  316039 cri.go:89] found id: "579953ecaa5c709ae190ac505c57c31de755d4d689b3be28199b4f18c038f574"
	I1010 18:20:51.434482  316039 cri.go:89] found id: "f690c75f2865bf33ee267a92d360114ddc8d677ee96e0e894aa2e4d900fd9adf"
	I1010 18:20:51.434486  316039 cri.go:89] found id: ""
	I1010 18:20:51.434533  316039 ssh_runner.go:195] Run: sudo runc list -f json
	W1010 18:20:51.460616  316039 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:20:51Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:20:51.460703  316039 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:20:51.481897  316039 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1010 18:20:51.481919  316039 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1010 18:20:51.481972  316039 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 18:20:51.498830  316039 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:20:51.500143  316039 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-556024" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:51.501037  316039 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-5815/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-556024" cluster setting kubeconfig missing "no-preload-556024" context setting]
	I1010 18:20:51.502303  316039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:51.504699  316039 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 18:20:51.525552  316039 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1010 18:20:51.525663  316039 kubeadm.go:601] duration metric: took 43.737077ms to restartPrimaryControlPlane
	I1010 18:20:51.525704  316039 kubeadm.go:402] duration metric: took 151.565362ms to StartCluster
	I1010 18:20:51.525736  316039 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:51.525837  316039 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:51.528729  316039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:51.529336  316039 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:20:51.529408  316039 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:20:51.530224  316039 addons.go:69] Setting storage-provisioner=true in profile "no-preload-556024"
	I1010 18:20:51.530244  316039 addons.go:238] Setting addon storage-provisioner=true in "no-preload-556024"
	W1010 18:20:51.530252  316039 addons.go:247] addon storage-provisioner should already be in state true
	I1010 18:20:51.530282  316039 host.go:66] Checking if "no-preload-556024" exists ...
	I1010 18:20:51.530800  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:51.531125  316039 addons.go:69] Setting dashboard=true in profile "no-preload-556024"
	I1010 18:20:51.531155  316039 addons.go:238] Setting addon dashboard=true in "no-preload-556024"
	I1010 18:20:51.531200  316039 addons.go:69] Setting default-storageclass=true in profile "no-preload-556024"
	W1010 18:20:51.531164  316039 addons.go:247] addon dashboard should already be in state true
	I1010 18:20:51.531221  316039 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-556024"
	I1010 18:20:51.531252  316039 host.go:66] Checking if "no-preload-556024" exists ...
	I1010 18:20:51.531518  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:51.531721  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:51.532678  316039 out.go:179] * Verifying Kubernetes components...
	I1010 18:20:51.529573  316039 config.go:182] Loaded profile config "no-preload-556024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:51.533687  316039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:51.568923  316039 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:20:51.570126  316039 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:51.570179  316039 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:20:51.570260  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:51.572781  316039 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1010 18:20:51.573105  316039 addons.go:238] Setting addon default-storageclass=true in "no-preload-556024"
	W1010 18:20:51.573167  316039 addons.go:247] addon default-storageclass should already be in state true
	I1010 18:20:51.573209  316039 host.go:66] Checking if "no-preload-556024" exists ...
	I1010 18:20:51.573839  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:51.574682  316039 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1010 18:20:49.625348  315243 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:49.625366  315243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:20:49.625419  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:49.625898  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1010 18:20:49.625914  315243 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1010 18:20:49.625963  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:49.665940  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:49.667220  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:49.670104  315243 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:49.670128  315243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:20:49.670179  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:49.701992  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:49.790496  315243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:20:49.808286  315243 node_ready.go:35] waiting up to 6m0s for node "embed-certs-472518" to be "Ready" ...
	I1010 18:20:49.877948  315243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:49.900523  315243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:49.904789  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1010 18:20:49.904813  315243 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1010 18:20:49.926463  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1010 18:20:49.926491  315243 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1010 18:20:49.948786  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1010 18:20:49.948861  315243 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1010 18:20:49.970537  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1010 18:20:49.970713  315243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1010 18:20:49.991031  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1010 18:20:49.991096  315243 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1010 18:20:50.007758  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1010 18:20:50.007779  315243 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1010 18:20:50.024836  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1010 18:20:50.024870  315243 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1010 18:20:50.047286  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1010 18:20:50.047312  315243 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1010 18:20:50.066137  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:20:50.066162  315243 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1010 18:20:50.082085  315243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
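
The dashboard addon lands as ten manifests applied in a single batched `kubectl apply`, as shown above. A Go sketch of assembling that invocation (the manifest list is truncated here; error output is surfaced for debugging):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyManifests builds one `kubectl apply` with a -f flag per manifest,
    // run through sudo with KUBECONFIG pointed at the on-node admin config.
    func applyManifests(kubectl, kubeconfig string, manifests []string) error {
        args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        out, err := exec.Command("sudo", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/dashboard-ns.yaml",
            "/etc/kubernetes/addons/dashboard-svc.yaml", // ...and the eight others above
        }
        fmt.Println(applyManifests("/var/lib/minikube/binaries/v1.34.1/kubectl",
            "/var/lib/minikube/kubeconfig", manifests))
    }
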
	I1010 18:20:51.728301  315243 node_ready.go:49] node "embed-certs-472518" is "Ready"
	I1010 18:20:51.728406  315243 node_ready.go:38] duration metric: took 1.920029979s for node "embed-certs-472518" to be "Ready" ...
	I1010 18:20:51.728515  315243 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:20:51.728588  315243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:20:51.628908  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:52.129081  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:52.628493  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:52.744888  310776 kubeadm.go:1113] duration metric: took 4.716782723s to wait for elevateKubeSystemPrivileges
	I1010 18:20:52.744920  310776 kubeadm.go:402] duration metric: took 15.95079426s to StartCluster
	I1010 18:20:52.744940  310776 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:52.745008  310776 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:52.748330  310776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:52.748752  310776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:20:52.749079  310776 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:20:52.749218  310776 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-821769"
	I1010 18:20:52.749252  310776 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-821769"
	I1010 18:20:52.749700  310776 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:20:52.749995  310776 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-821769"
	I1010 18:20:52.751220  310776 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-821769"
	I1010 18:20:52.751275  310776 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:20:52.750124  310776 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:52.750164  310776 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:20:52.751814  310776 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:20:52.754296  310776 out.go:179] * Verifying Kubernetes components...
	I1010 18:20:52.757073  310776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:52.784878  310776 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-821769"
	I1010 18:20:52.784930  310776 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:20:52.785459  310776 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:20:52.789598  310776 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:20:51.884191  315243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.006198253s)
	I1010 18:20:53.049905  315243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.14934381s)
	I1010 18:20:53.050041  315243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.967926322s)
	I1010 18:20:53.050251  315243 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.321637338s)
	I1010 18:20:53.050277  315243 api_server.go:72] duration metric: took 3.4574213s to wait for apiserver process to appear ...
	I1010 18:20:53.050285  315243 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:20:53.050312  315243 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1010 18:20:53.052034  315243 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-472518 addons enable metrics-server
	
	I1010 18:20:53.053389  315243 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
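
The enabled-addon set printed above can be cross-checked against the live profile once the run finishes. A minimal check, assuming the embed-certs-472518 profile from this log still exists on the host and that minikube named the kubeconfig context after the profile (its default):

	minikube -p embed-certs-472518 addons list
	kubectl --context embed-certs-472518 -n kubernetes-dashboard get pods
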
	I1010 18:20:51.575500  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1010 18:20:51.575526  316039 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1010 18:20:51.575614  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:51.610790  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:51.615370  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:51.619423  316039 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:51.619520  316039 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:20:51.619582  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:51.653223  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:51.773499  316039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:20:51.814711  316039 node_ready.go:35] waiting up to 6m0s for node "no-preload-556024" to be "Ready" ...
	I1010 18:20:51.914904  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1010 18:20:51.914932  316039 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1010 18:20:51.923500  316039 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:51.949787  316039 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:51.968366  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1010 18:20:51.968396  316039 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1010 18:20:52.039689  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1010 18:20:52.039716  316039 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1010 18:20:52.098625  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1010 18:20:52.098653  316039 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1010 18:20:52.167741  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1010 18:20:52.167801  316039 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1010 18:20:52.219328  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1010 18:20:52.219352  316039 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1010 18:20:52.265308  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1010 18:20:52.265341  316039 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1010 18:20:52.313716  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1010 18:20:52.313766  316039 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1010 18:20:52.352592  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:20:52.352644  316039 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1010 18:20:52.387452  316039 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:20:52.790760  310776 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:52.790790  310776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:20:52.790870  310776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:20:52.821845  310776 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:52.821873  310776 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:20:52.821928  310776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:20:52.827947  310776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:20:52.860145  310776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:20:52.948729  310776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1010 18:20:52.991756  310776 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:20:53.123700  310776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:53.139884  310776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:53.325308  310776 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1010 18:20:53.330566  310776 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-821769" to be "Ready" ...
	I1010 18:20:53.592278  310776 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
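
The sed pipeline logged at 18:20:52.948729 edits the coredns ConfigMap in place before the replace. Reconstructed from the two sed expressions in that command (shown only to make the injected fragment readable), the Corefile gains a hosts block ahead of the forward directive, plus a log directive ahead of errors:

	hosts {
	   192.168.103.1 host.minikube.internal
	   fallthrough
	}

The "host record injected" line at 18:20:53.325308 confirms the replace succeeded.
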
	W1010 18:20:50.034107  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:52.042686  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	I1010 18:20:54.467305  316039 node_ready.go:49] node "no-preload-556024" is "Ready"
	I1010 18:20:54.467335  316039 node_ready.go:38] duration metric: took 2.652575598s for node "no-preload-556024" to be "Ready" ...
	I1010 18:20:54.467351  316039 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:20:54.467400  316039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:20:55.159684  316039 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.236143034s)
	I1010 18:20:55.159770  316039 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.209938968s)
	I1010 18:20:55.159920  316039 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.772426078s)
	I1010 18:20:55.159956  316039 api_server.go:72] duration metric: took 3.630212814s to wait for apiserver process to appear ...
	I1010 18:20:55.159971  316039 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:20:55.159989  316039 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1010 18:20:55.165079  316039 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:20:55.165108  316039 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
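
The [+]/[-] listing above is the apiserver's verbose healthz output; the two [-] entries are post-start hooks (RBAC bootstrap roles and system priority classes) that normally turn ok within seconds of an apiserver restart, which matches the 200 this same endpoint returns at 18:20:56 further down. The same listing can be fetched by hand, assuming the cluster is reachable with the profile's context:

	kubectl --context no-preload-556024 get --raw='/healthz?verbose'
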
	I1010 18:20:55.171486  316039 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-556024 addons enable metrics-server
	
	I1010 18:20:55.172798  316039 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1010 18:20:53.593383  310776 addons.go:514] duration metric: took 844.300192ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1010 18:20:53.831818  310776 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-821769" context rescaled to 1 replicas
	W1010 18:20:55.334794  310776 node_ready.go:57] node "default-k8s-diff-port-821769" has "Ready":"False" status (will retry)
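
node_ready.go polls the node object until its Ready condition is True, retrying on the warnings above. An equivalent one-shot check with kubectl, assuming the profile's context is configured:

	kubectl --context default-k8s-diff-port-821769 wait --for=condition=Ready node/default-k8s-diff-port-821769 --timeout=6m
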
	I1010 18:20:53.054435  315243 addons.go:514] duration metric: took 3.461533728s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1010 18:20:53.058403  315243 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:20:53.058478  315243 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 18:20:53.551135  315243 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1010 18:20:53.557162  315243 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1010 18:20:53.558501  315243 api_server.go:141] control plane version: v1.34.1
	I1010 18:20:53.558524  315243 api_server.go:131] duration metric: took 508.226677ms to wait for apiserver health ...
	I1010 18:20:53.558535  315243 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:20:53.563761  315243 system_pods.go:59] 8 kube-system pods found
	I1010 18:20:53.563802  315243 system_pods.go:61] "coredns-66bc5c9577-hrcxc" [98494133-86f7-4d52-9de0-1b648c4e1eac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:53.563840  315243 system_pods.go:61] "etcd-embed-certs-472518" [ef258b42-940e-4df8-bda7-2abda18693ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:20:53.563851  315243 system_pods.go:61] "kindnet-kpr69" [a2bc6e25-f261-43aa-b10b-35757900e93b] Running
	I1010 18:20:53.563861  315243 system_pods.go:61] "kube-apiserver-embed-certs-472518" [d3c6aec3-5dbe-4bda-a057-5ac1cacd6dc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:20:53.563870  315243 system_pods.go:61] "kube-controller-manager-embed-certs-472518" [35d677fb-3f5f-4b3e-8175-60234a80c67e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:20:53.563877  315243 system_pods.go:61] "kube-proxy-bq985" [e2d6bf76-4b03-4118-b61b-605d27646095] Running
	I1010 18:20:53.563888  315243 system_pods.go:61] "kube-scheduler-embed-certs-472518" [7ebab2fe-6192-45eb-80a1-a169ea655e6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:20:53.563912  315243 system_pods.go:61] "storage-provisioner" [3237266d-6c19-4af5-aef2-8d99c561d535] Running
	I1010 18:20:53.563925  315243 system_pods.go:74] duration metric: took 5.382708ms to wait for pod list to return data ...
	I1010 18:20:53.563947  315243 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:20:53.566753  315243 default_sa.go:45] found service account: "default"
	I1010 18:20:53.566775  315243 default_sa.go:55] duration metric: took 2.816607ms for default service account to be created ...
	I1010 18:20:53.566784  315243 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:20:53.569996  315243 system_pods.go:86] 8 kube-system pods found
	I1010 18:20:53.570035  315243 system_pods.go:89] "coredns-66bc5c9577-hrcxc" [98494133-86f7-4d52-9de0-1b648c4e1eac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:53.570047  315243 system_pods.go:89] "etcd-embed-certs-472518" [ef258b42-940e-4df8-bda7-2abda18693ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:20:53.570092  315243 system_pods.go:89] "kindnet-kpr69" [a2bc6e25-f261-43aa-b10b-35757900e93b] Running
	I1010 18:20:53.570102  315243 system_pods.go:89] "kube-apiserver-embed-certs-472518" [d3c6aec3-5dbe-4bda-a057-5ac1cacd6dc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:20:53.570118  315243 system_pods.go:89] "kube-controller-manager-embed-certs-472518" [35d677fb-3f5f-4b3e-8175-60234a80c67e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:20:53.570132  315243 system_pods.go:89] "kube-proxy-bq985" [e2d6bf76-4b03-4118-b61b-605d27646095] Running
	I1010 18:20:53.570140  315243 system_pods.go:89] "kube-scheduler-embed-certs-472518" [7ebab2fe-6192-45eb-80a1-a169ea655e6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:20:53.570145  315243 system_pods.go:89] "storage-provisioner" [3237266d-6c19-4af5-aef2-8d99c561d535] Running
	I1010 18:20:53.570154  315243 system_pods.go:126] duration metric: took 3.363508ms to wait for k8s-apps to be running ...
	I1010 18:20:53.570169  315243 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:20:53.570223  315243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:20:53.589472  315243 system_svc.go:56] duration metric: took 19.294939ms WaitForService to wait for kubelet
	I1010 18:20:53.589498  315243 kubeadm.go:586] duration metric: took 3.99664162s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:20:53.589514  315243 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:20:53.593679  315243 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:20:53.593708  315243 node_conditions.go:123] node cpu capacity is 8
	I1010 18:20:53.593724  315243 node_conditions.go:105] duration metric: took 4.204587ms to run NodePressure ...
	I1010 18:20:53.593743  315243 start.go:241] waiting for startup goroutines ...
	I1010 18:20:53.593753  315243 start.go:246] waiting for cluster config update ...
	I1010 18:20:53.593767  315243 start.go:255] writing updated cluster config ...
	I1010 18:20:53.594097  315243 ssh_runner.go:195] Run: rm -f paused
	I1010 18:20:53.599326  315243 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:20:53.605128  315243 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hrcxc" in "kube-system" namespace to be "Ready" or be gone ...
	W1010 18:20:55.615427  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
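
These pod_ready warnings come from the extra 4m0s wait declared at 18:20:53.599326, which watches kube-system pods carrying the listed control-plane labels. A hand-rolled equivalent for the coredns pod, assuming the same k8s-app=kube-dns label from that log line:

	kubectl --context embed-certs-472518 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
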
	I1010 18:20:55.173737  316039 addons.go:514] duration metric: took 3.644339704s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1010 18:20:55.660767  316039 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1010 18:20:55.667019  316039 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:20:55.667122  316039 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 18:20:56.160831  316039 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1010 18:20:56.166112  316039 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1010 18:20:56.167511  316039 api_server.go:141] control plane version: v1.34.1
	I1010 18:20:56.167538  316039 api_server.go:131] duration metric: took 1.007560189s to wait for apiserver health ...
	I1010 18:20:56.167549  316039 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:20:56.171980  316039 system_pods.go:59] 8 kube-system pods found
	I1010 18:20:56.172028  316039 system_pods.go:61] "coredns-66bc5c9577-wpsrd" [316be091-2de7-417c-b44b-1d26108e3ed3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:56.172041  316039 system_pods.go:61] "etcd-no-preload-556024" [0f8f77e3-e838-4f27-9f17-2cd264198574] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:20:56.172063  316039 system_pods.go:61] "kindnet-wsk6h" [71384861-5289-4d2b-8d62-b7d2c27d86b8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1010 18:20:56.172073  316039 system_pods.go:61] "kube-apiserver-no-preload-556024" [7efe66ae-83bf-4ea5-a271-d8e944f74053] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:20:56.172083  316039 system_pods.go:61] "kube-controller-manager-no-preload-556024" [9e7fbd67-ce38-425d-b80d-b8ff3748fa70] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:20:56.172091  316039 system_pods.go:61] "kube-proxy-frchp" [3457ebf4-7608-4c78-b8dc-3a92a2fb32ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1010 18:20:56.172099  316039 system_pods.go:61] "kube-scheduler-no-preload-556024" [c6fb51f0-cf8d-4a56-aba5-95aff4190b44] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:20:56.172107  316039 system_pods.go:61] "storage-provisioner" [42a21c5e-4318-43f7-8d2a-dc62676b17c2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:20:56.172115  316039 system_pods.go:74] duration metric: took 4.558605ms to wait for pod list to return data ...
	I1010 18:20:56.172125  316039 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:20:56.174926  316039 default_sa.go:45] found service account: "default"
	I1010 18:20:56.174945  316039 default_sa.go:55] duration metric: took 2.814097ms for default service account to be created ...
	I1010 18:20:56.174954  316039 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:20:56.177615  316039 system_pods.go:86] 8 kube-system pods found
	I1010 18:20:56.177644  316039 system_pods.go:89] "coredns-66bc5c9577-wpsrd" [316be091-2de7-417c-b44b-1d26108e3ed3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:56.177653  316039 system_pods.go:89] "etcd-no-preload-556024" [0f8f77e3-e838-4f27-9f17-2cd264198574] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:20:56.177664  316039 system_pods.go:89] "kindnet-wsk6h" [71384861-5289-4d2b-8d62-b7d2c27d86b8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1010 18:20:56.177673  316039 system_pods.go:89] "kube-apiserver-no-preload-556024" [7efe66ae-83bf-4ea5-a271-d8e944f74053] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:20:56.177683  316039 system_pods.go:89] "kube-controller-manager-no-preload-556024" [9e7fbd67-ce38-425d-b80d-b8ff3748fa70] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:20:56.177697  316039 system_pods.go:89] "kube-proxy-frchp" [3457ebf4-7608-4c78-b8dc-3a92a2fb32ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1010 18:20:56.177706  316039 system_pods.go:89] "kube-scheduler-no-preload-556024" [c6fb51f0-cf8d-4a56-aba5-95aff4190b44] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:20:56.177717  316039 system_pods.go:89] "storage-provisioner" [42a21c5e-4318-43f7-8d2a-dc62676b17c2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:20:56.177725  316039 system_pods.go:126] duration metric: took 2.765119ms to wait for k8s-apps to be running ...
	I1010 18:20:56.177734  316039 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:20:56.177779  316039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:20:56.195926  316039 system_svc.go:56] duration metric: took 18.185245ms WaitForService to wait for kubelet
	I1010 18:20:56.195953  316039 kubeadm.go:586] duration metric: took 4.666211157s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:20:56.195977  316039 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:20:56.199540  316039 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:20:56.199578  316039 node_conditions.go:123] node cpu capacity is 8
	I1010 18:20:56.199596  316039 node_conditions.go:105] duration metric: took 3.612981ms to run NodePressure ...
	I1010 18:20:56.199610  316039 start.go:241] waiting for startup goroutines ...
	I1010 18:20:56.199621  316039 start.go:246] waiting for cluster config update ...
	I1010 18:20:56.199635  316039 start.go:255] writing updated cluster config ...
	I1010 18:20:56.199914  316039 ssh_runner.go:195] Run: rm -f paused
	I1010 18:20:56.205011  316039 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:20:56.210819  316039 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wpsrd" in "kube-system" namespace to be "Ready" or be gone ...
	W1010 18:20:58.216565  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:20:54.534826  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:56.537833  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:57.839422  310776 node_ready.go:57] node "default-k8s-diff-port-821769" has "Ready":"False" status (will retry)
	W1010 18:21:00.334364  310776 node_ready.go:57] node "default-k8s-diff-port-821769" has "Ready":"False" status (will retry)
	W1010 18:20:58.112627  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:00.611479  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:00.219211  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:02.732835  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:20:59.033775  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:21:01.532897  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:21:02.334772  310776 node_ready.go:57] node "default-k8s-diff-port-821769" has "Ready":"False" status (will retry)
	I1010 18:21:04.334550  310776 node_ready.go:49] node "default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:04.334584  310776 node_ready.go:38] duration metric: took 11.003942186s for node "default-k8s-diff-port-821769" to be "Ready" ...
	I1010 18:21:04.334602  310776 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:21:04.334661  310776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:21:04.352414  310776 api_server.go:72] duration metric: took 11.600692282s to wait for apiserver process to appear ...
	I1010 18:21:04.352440  310776 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:21:04.352461  310776 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1010 18:21:04.357202  310776 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1010 18:21:04.358448  310776 api_server.go:141] control plane version: v1.34.1
	I1010 18:21:04.358475  310776 api_server.go:131] duration metric: took 6.027777ms to wait for apiserver health ...
	I1010 18:21:04.358486  310776 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:21:04.362525  310776 system_pods.go:59] 8 kube-system pods found
	I1010 18:21:04.362567  310776 system_pods.go:61] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:04.362576  310776 system_pods.go:61] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running
	I1010 18:21:04.362584  310776 system_pods.go:61] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:04.362590  310776 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running
	I1010 18:21:04.362597  310776 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running
	I1010 18:21:04.362604  310776 system_pods.go:61] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:04.362609  310776 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running
	I1010 18:21:04.362621  310776 system_pods.go:61] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:21:04.362634  310776 system_pods.go:74] duration metric: took 4.14166ms to wait for pod list to return data ...
	I1010 18:21:04.362650  310776 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:21:04.365765  310776 default_sa.go:45] found service account: "default"
	I1010 18:21:04.365790  310776 default_sa.go:55] duration metric: took 3.13114ms for default service account to be created ...
	I1010 18:21:04.365801  310776 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:21:04.368917  310776 system_pods.go:86] 8 kube-system pods found
	I1010 18:21:04.368948  310776 system_pods.go:89] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:04.368953  310776 system_pods.go:89] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running
	I1010 18:21:04.368962  310776 system_pods.go:89] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:04.368966  310776 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running
	I1010 18:21:04.368970  310776 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running
	I1010 18:21:04.368973  310776 system_pods.go:89] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:04.368977  310776 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running
	I1010 18:21:04.368982  310776 system_pods.go:89] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:21:04.369018  310776 retry.go:31] will retry after 236.267744ms: missing components: kube-dns
	I1010 18:21:04.617498  310776 system_pods.go:86] 8 kube-system pods found
	I1010 18:21:04.617554  310776 system_pods.go:89] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:04.617563  310776 system_pods.go:89] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running
	I1010 18:21:04.617572  310776 system_pods.go:89] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:04.617577  310776 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running
	I1010 18:21:04.617583  310776 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running
	I1010 18:21:04.617588  310776 system_pods.go:89] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:04.617593  310776 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running
	I1010 18:21:04.617600  310776 system_pods.go:89] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:21:04.617679  310776 retry.go:31] will retry after 358.019281ms: missing components: kube-dns
	I1010 18:21:04.980610  310776 system_pods.go:86] 8 kube-system pods found
	I1010 18:21:04.980648  310776 system_pods.go:89] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:04.980657  310776 system_pods.go:89] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running
	I1010 18:21:04.980665  310776 system_pods.go:89] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:04.980671  310776 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running
	I1010 18:21:04.980677  310776 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running
	I1010 18:21:04.980682  310776 system_pods.go:89] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:04.980691  310776 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running
	I1010 18:21:04.980698  310776 system_pods.go:89] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:21:04.980718  310776 retry.go:31] will retry after 460.448201ms: missing components: kube-dns
	I1010 18:21:05.476108  310776 system_pods.go:86] 8 kube-system pods found
	I1010 18:21:05.476135  310776 system_pods.go:89] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Running
	I1010 18:21:05.476141  310776 system_pods.go:89] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running
	I1010 18:21:05.476147  310776 system_pods.go:89] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:05.476153  310776 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running
	I1010 18:21:05.476158  310776 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running
	I1010 18:21:05.476164  310776 system_pods.go:89] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:05.476169  310776 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running
	I1010 18:21:05.476175  310776 system_pods.go:89] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Running
	I1010 18:21:05.476185  310776 system_pods.go:126] duration metric: took 1.110376994s to wait for k8s-apps to be running ...
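
The three "will retry after ..." lines above show system_pods.go backing off between polls until kube-dns reports Running; the delays grow, but the exact schedule is internal to minikube's retry helper. A rough shell re-creation of that loop, with purely illustrative delays:

	# Illustrative only: delays approximate the ~0.24s/0.36s/0.46s seen in the log.
	for delay in 0.24 0.36 0.46; do
		kubectl -n kube-system get pods -l k8s-app=kube-dns \
			-o jsonpath='{.items[0].status.phase}' | grep -q Running && break
		sleep "$delay"
	done
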
	I1010 18:21:05.476203  310776 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:21:05.476263  310776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:05.491314  310776 system_svc.go:56] duration metric: took 15.10412ms WaitForService to wait for kubelet
	I1010 18:21:05.491339  310776 kubeadm.go:586] duration metric: took 12.739624944s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:21:05.491357  310776 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:21:05.494549  310776 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:21:05.494574  310776 node_conditions.go:123] node cpu capacity is 8
	I1010 18:21:05.494597  310776 node_conditions.go:105] duration metric: took 3.235725ms to run NodePressure ...
	I1010 18:21:05.494610  310776 start.go:241] waiting for startup goroutines ...
	I1010 18:21:05.494620  310776 start.go:246] waiting for cluster config update ...
	I1010 18:21:05.494635  310776 start.go:255] writing updated cluster config ...
	I1010 18:21:05.505739  310776 ssh_runner.go:195] Run: rm -f paused
	I1010 18:21:05.510435  310776 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:21:05.514397  310776 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wrz5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.519411  310776 pod_ready.go:94] pod "coredns-66bc5c9577-wrz5v" is "Ready"
	I1010 18:21:05.519440  310776 pod_ready.go:86] duration metric: took 5.021224ms for pod "coredns-66bc5c9577-wrz5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.521798  310776 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.526425  310776 pod_ready.go:94] pod "etcd-default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:05.526453  310776 pod_ready.go:86] duration metric: took 4.627916ms for pod "etcd-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.528777  310776 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.533585  310776 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:05.533610  310776 pod_ready.go:86] duration metric: took 4.808877ms for pod "kube-apiserver-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.535771  310776 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.915199  310776 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:05.915227  310776 pod_ready.go:86] duration metric: took 379.433579ms for pod "kube-controller-manager-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:06.115325  310776 pod_ready.go:83] waiting for pod "kube-proxy-h2mzf" in "kube-system" namespace to be "Ready" or be gone ...
	W1010 18:21:02.613407  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:04.613477  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	I1010 18:21:06.515281  310776 pod_ready.go:94] pod "kube-proxy-h2mzf" is "Ready"
	I1010 18:21:06.515310  310776 pod_ready.go:86] duration metric: took 399.959779ms for pod "kube-proxy-h2mzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:06.716017  310776 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:07.115133  310776 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:07.115162  310776 pod_ready.go:86] duration metric: took 399.114099ms for pod "kube-scheduler-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:07.115176  310776 pod_ready.go:40] duration metric: took 1.604699188s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:21:07.163929  310776 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:21:07.192097  310776 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-821769" cluster and "default" namespace by default
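
"Done!" here means the context written into the kubeconfig locked at 18:20:52.748330 is now active. With several overlapping profiles in one report, the active context can be confirmed with the standard kubectl config commands:

	kubectl config current-context
	kubectl config get-contexts
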
	W1010 18:21:05.217220  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:07.716808  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:04.032734  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:21:06.531020  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:21:08.531357  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	I1010 18:21:09.532675  309154 pod_ready.go:94] pod "coredns-5dd5756b68-qfwck" is "Ready"
	I1010 18:21:09.532706  309154 pod_ready.go:86] duration metric: took 32.006855812s for pod "coredns-5dd5756b68-qfwck" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.535886  309154 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.540776  309154 pod_ready.go:94] pod "etcd-old-k8s-version-141193" is "Ready"
	I1010 18:21:09.540797  309154 pod_ready.go:86] duration metric: took 4.887324ms for pod "etcd-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.543453  309154 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.547188  309154 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-141193" is "Ready"
	I1010 18:21:09.547212  309154 pod_ready.go:86] duration metric: took 3.738135ms for pod "kube-apiserver-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.549745  309154 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.730359  309154 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-141193" is "Ready"
	I1010 18:21:09.730391  309154 pod_ready.go:86] duration metric: took 180.622284ms for pod "kube-controller-manager-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.930224  309154 pod_ready.go:83] waiting for pod "kube-proxy-n9klp" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:10.329749  309154 pod_ready.go:94] pod "kube-proxy-n9klp" is "Ready"
	I1010 18:21:10.329777  309154 pod_ready.go:86] duration metric: took 399.527981ms for pod "kube-proxy-n9klp" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:10.533434  309154 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:10.930255  309154 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-141193" is "Ready"
	I1010 18:21:10.930280  309154 pod_ready.go:86] duration metric: took 396.81759ms for pod "kube-scheduler-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:10.930291  309154 pod_ready.go:40] duration metric: took 33.409574947s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:21:10.976268  309154 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1010 18:21:10.978153  309154 out.go:203] 
	W1010 18:21:10.979362  309154 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1010 18:21:10.980507  309154 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1010 18:21:10.981654  309154 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-141193" cluster and "default" namespace by default
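
The skew warning above flags the host kubectl (1.34.1) against this cluster's 1.28.0, six minor versions apart and outside the one-version skew kubectl supports. The hint on the previous lines routes around it by letting minikube fetch a matching client, e.g.:

	minikube -p old-k8s-version-141193 kubectl -- get pods -A
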
	W1010 18:21:07.110875  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:09.610687  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:11.612648  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:09.717125  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:12.215991  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:14.111016  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:16.112160  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:14.715907  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:16.717135  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:18.610479  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:21.111582  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:19.216867  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:21.716430  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 10 18:20:58 old-k8s-version-141193 crio[563]: time="2025-10-10T18:20:58.082875577Z" level=info msg="Started container" PID=1724 containerID=317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs/dashboard-metrics-scraper id=e71daf43-326b-44cb-bd8a-d3eb2c862b08 name=/runtime.v1.RuntimeService/StartContainer sandboxID=09605202217f839452fa39403de405feef1641f993649b6956e636d1bd9f8906
	Oct 10 18:20:59 old-k8s-version-141193 crio[563]: time="2025-10-10T18:20:59.036209978Z" level=info msg="Removing container: 6ebe203dda94ad2ffbefc3adcdc8edca63de95384ffb197e95d7d948c64a7df8" id=8624b80f-3691-46d6-9ee1-96808defb8e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:20:59 old-k8s-version-141193 crio[563]: time="2025-10-10T18:20:59.047819255Z" level=info msg="Removed container 6ebe203dda94ad2ffbefc3adcdc8edca63de95384ffb197e95d7d948c64a7df8: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs/dashboard-metrics-scraper" id=8624b80f-3691-46d6-9ee1-96808defb8e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.058459186Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=69827828-75f9-42b6-945c-1c687e831f11 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.059379803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=407fc15b-6efb-40db-8f13-dabf56e6993d name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.06037917Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cc36a961-c96e-44ff-abc5-33a69ca73fd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.060651335Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.064669479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.064856318Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/35f4e042d921b551dca48577da31839d46eecb60eb815956dca693433218a3d0/merged/etc/passwd: no such file or directory"
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.064888869Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/35f4e042d921b551dca48577da31839d46eecb60eb815956dca693433218a3d0/merged/etc/group: no such file or directory"
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.065182175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.097652948Z" level=info msg="Created container de2790671165db40044a25122f17100a12947ed8065bf6e2ed1ff37219e247dd: kube-system/storage-provisioner/storage-provisioner" id=cc36a961-c96e-44ff-abc5-33a69ca73fd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.09836933Z" level=info msg="Starting container: de2790671165db40044a25122f17100a12947ed8065bf6e2ed1ff37219e247dd" id=2d6f1078-d156-45b2-bc0c-ea5118de7e93 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.100139132Z" level=info msg="Started container" PID=1740 containerID=de2790671165db40044a25122f17100a12947ed8065bf6e2ed1ff37219e247dd description=kube-system/storage-provisioner/storage-provisioner id=2d6f1078-d156-45b2-bc0c-ea5118de7e93 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfab681e276ea8331c4efdc53f86e44f3bf06cf39a7ee8394181e981af34fd2e
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.93949691Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fe8f9be7-f063-4fe6-b66c-2f19de11f845 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.940370725Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=100faed2-1e2b-4ce9-9362-d348e020fde3 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.941338354Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs/dashboard-metrics-scraper" id=ce0380bd-c295-4fd1-9595-bc4cca79cfdc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.941573634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.948452847Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.949146439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.972903908Z" level=info msg="Created container 1667847f042344bbbf08942c8b74a2e8385d5dfaf27c738cc310c23092d32a3d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs/dashboard-metrics-scraper" id=ce0380bd-c295-4fd1-9595-bc4cca79cfdc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.973521234Z" level=info msg="Starting container: 1667847f042344bbbf08942c8b74a2e8385d5dfaf27c738cc310c23092d32a3d" id=50a55c86-24fc-47ed-a2d2-ef28904341fe name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.975184688Z" level=info msg="Started container" PID=1777 containerID=1667847f042344bbbf08942c8b74a2e8385d5dfaf27c738cc310c23092d32a3d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs/dashboard-metrics-scraper id=50a55c86-24fc-47ed-a2d2-ef28904341fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=09605202217f839452fa39403de405feef1641f993649b6956e636d1bd9f8906
	Oct 10 18:21:15 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:15.076925376Z" level=info msg="Removing container: 317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734" id=3293889e-fd85-45bc-8ce0-1da6f99284ba name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:21:15 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:15.087039035Z" level=info msg="Removed container 317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs/dashboard-metrics-scraper" id=3293889e-fd85-45bc-8ce0-1da6f99284ba name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	1667847f04234       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   2                   09605202217f8       dashboard-metrics-scraper-5f989dc9cf-nsnjs       kubernetes-dashboard
	de2790671165d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   cfab681e276ea       storage-provisioner                              kube-system
	7b7c62874a1a3       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   30 seconds ago      Running             kubernetes-dashboard        0                   49a17004ec811       kubernetes-dashboard-8694d4445c-g8lm9            kubernetes-dashboard
	2d06b9d980aa2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   5e796db1aa438       busybox                                          default
	76851e857de85       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           48 seconds ago      Running             coredns                     0                   a1120ac22e98d       coredns-5dd5756b68-qfwck                         kube-system
	17dc2d6edfc14       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   d86d65afaac81       kindnet-wjlh2                                    kube-system
	f0a141878e079       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           48 seconds ago      Running             kube-proxy                  0                   168360b4de985       kube-proxy-n9klp                                 kube-system
	194d18ca204ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   cfab681e276ea       storage-provisioner                              kube-system
	35c22fae38401       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           52 seconds ago      Running             kube-controller-manager     0                   97ac1788edac4       kube-controller-manager-old-k8s-version-141193   kube-system
	40a7654c69d62       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           52 seconds ago      Running             kube-scheduler              0                   c20a7e3eb2399       kube-scheduler-old-k8s-version-141193            kube-system
	fd2510c67a243       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           52 seconds ago      Running             kube-apiserver              0                   d30dd367d921e       kube-apiserver-old-k8s-version-141193            kube-system
	3757d2bd72722       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           52 seconds ago      Running             etcd                        0                   545d8a01a07c9       etcd-old-k8s-version-141193                      kube-system
	
	
	==> coredns [76851e857de85c1d61246f777900d9a4581fca45808f5b980f367404d0d69f55] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39843 - 31746 "HINFO IN 6967103515947113627.6149114998770294594. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027021535s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-141193
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-141193
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=old-k8s-version-141193
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_19_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:19:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-141193
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:21:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:21:07 +0000   Fri, 10 Oct 2025 18:19:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:21:07 +0000   Fri, 10 Oct 2025 18:19:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:21:07 +0000   Fri, 10 Oct 2025 18:19:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 18:21:07 +0000   Fri, 10 Oct 2025 18:19:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-141193
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                8f8bdf4a-f8cb-42ff-aa21-c2ad268c8723
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-5dd5756b68-qfwck                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-old-k8s-version-141193                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-wjlh2                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-old-k8s-version-141193             250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-old-k8s-version-141193    200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-n9klp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-old-k8s-version-141193             100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-nsnjs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-g8lm9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  119s               kubelet          Node old-k8s-version-141193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s               kubelet          Node old-k8s-version-141193 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s               kubelet          Node old-k8s-version-141193 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node old-k8s-version-141193 event: Registered Node old-k8s-version-141193 in Controller
	  Normal  NodeReady                93s                kubelet          Node old-k8s-version-141193 status is now: NodeReady
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 53s)  kubelet          Node old-k8s-version-141193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 53s)  kubelet          Node old-k8s-version-141193 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 53s)  kubelet          Node old-k8s-version-141193 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                node-controller  Node old-k8s-version-141193 event: Registered Node old-k8s-version-141193 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 95 0c 3e 92 2e 08 06
	[  +0.052845] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[ +11.354316] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[  +7.101927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 9b 73 27 8c 80 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[  +6.287191] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 27 2d 28 d6 46 08 06
	[  +0.000293] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[Oct10 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 8c 22 f6 6b cf 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 29 bf 13 20 f9 08 06
	[ +15.511156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e d6 74 aa 27 d0 08 06
	[  +0.008495] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	[Oct10 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 0b 54 33 52 4e 08 06
	[  +0.000597] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	
	
	==> etcd [3757d2bd727229dd68d4be360086d9271d28f5c098b84264b16d8e9b1794093f] <==
	{"level":"info","ts":"2025-10-10T18:20:33.499505Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-10T18:20:33.499525Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-10T18:20:33.49971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-10T18:20:33.499812Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-10T18:20:33.499937Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-10T18:20:33.500136Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-10T18:20:33.502301Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-10T18:20:33.502594Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-10T18:20:33.502657Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-10T18:20:33.50274Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-10T18:20:33.502773Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-10T18:20:34.990013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-10T18:20:34.990071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-10T18:20:34.990106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-10T18:20:34.990119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-10T18:20:34.990127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-10T18:20:34.990136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-10T18:20:34.990143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-10T18:20:34.991452Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-141193 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-10T18:20:34.991448Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-10T18:20:34.991462Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-10T18:20:34.991659Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-10T18:20:34.991685Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-10T18:20:34.992706Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-10T18:20:34.992751Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:21:25 up  1:03,  0 user,  load average: 5.55, 4.62, 2.91
	Linux old-k8s-version-141193 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [17dc2d6edfc14bbc3aad59599c1fe778e3325320e2e82a8580a705cf10bd89fe] <==
	I1010 18:20:37.441263       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:20:37.534873       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1010 18:20:37.535081       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:20:37.535100       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:20:37.535131       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:20:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:20:37.737516       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:20:37.737930       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:20:37.737988       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:20:37.738411       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 18:20:38.138087       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:20:38.138115       1 metrics.go:72] Registering metrics
	I1010 18:20:38.138166       1 controller.go:711] "Syncing nftables rules"
	I1010 18:20:47.738272       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1010 18:20:47.738333       1 main.go:301] handling current node
	I1010 18:20:57.737420       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1010 18:20:57.737471       1 main.go:301] handling current node
	I1010 18:21:07.737815       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1010 18:21:07.737844       1 main.go:301] handling current node
	I1010 18:21:17.739148       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1010 18:21:17.739197       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fd2510c67a2437bd698c9b5bc34c054b544522802f65bf2ffc6d09e1b707e52f] <==
	I1010 18:20:35.959903       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1010 18:20:36.017786       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:20:36.023801       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1010 18:20:36.060318       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1010 18:20:36.060452       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1010 18:20:36.060340       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1010 18:20:36.061092       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1010 18:20:36.060373       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1010 18:20:36.061230       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1010 18:20:36.061353       1 aggregator.go:166] initial CRD sync complete...
	I1010 18:20:36.061416       1 autoregister_controller.go:141] Starting autoregister controller
	I1010 18:20:36.061443       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1010 18:20:36.061470       1 cache.go:39] Caches are synced for autoregister controller
	I1010 18:20:36.063549       1 shared_informer.go:318] Caches are synced for configmaps
	I1010 18:20:36.849179       1 controller.go:624] quota admission added evaluator for: namespaces
	I1010 18:20:36.880411       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1010 18:20:36.896732       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:20:36.904909       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:20:36.911814       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1010 18:20:36.953683       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.227.239"}
	I1010 18:20:36.962337       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:20:36.971813       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.21.215"}
	I1010 18:20:48.638682       1 controller.go:624] quota admission added evaluator for: endpoints
	I1010 18:20:48.788319       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1010 18:20:48.837357       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [35c22fae38401c52658935667354e9d6d1ec78136964aab98a72bf3ef5eb768f] <==
	I1010 18:20:48.793603       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1010 18:20:48.795098       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1010 18:20:48.946034       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-nsnjs"
	I1010 18:20:48.948468       1 shared_informer.go:318] Caches are synced for garbage collector
	I1010 18:20:48.948563       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1010 18:20:48.948696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="502.101993ms"
	I1010 18:20:48.948737       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-g8lm9"
	I1010 18:20:48.949655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="255.576µs"
	I1010 18:20:48.956488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="163.048968ms"
	I1010 18:20:48.957815       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="162.866438ms"
	I1010 18:20:48.958429       1 shared_informer.go:318] Caches are synced for garbage collector
	I1010 18:20:48.965393       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.846233ms"
	I1010 18:20:48.965487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.956µs"
	I1010 18:20:48.968580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.706915ms"
	I1010 18:20:48.968664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.893µs"
	I1010 18:20:48.977037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.789µs"
	I1010 18:20:56.043119       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.974116ms"
	I1010 18:20:56.043257       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="87.621µs"
	I1010 18:20:58.042069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.508µs"
	I1010 18:20:59.055831       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.411µs"
	I1010 18:21:00.051084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.962µs"
	I1010 18:21:09.531121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.613968ms"
	I1010 18:21:09.531252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.783µs"
	I1010 18:21:15.088416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.475µs"
	I1010 18:21:19.268911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="91.347µs"
	
	
	==> kube-proxy [f0a141878e079b9bef80d8c836ead2aaa0e5e6f6923e15d06ab08325251c3ff9] <==
	I1010 18:20:37.339698       1 server_others.go:69] "Using iptables proxy"
	I1010 18:20:37.349336       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1010 18:20:37.367706       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:20:37.369965       1 server_others.go:152] "Using iptables Proxier"
	I1010 18:20:37.370003       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1010 18:20:37.370013       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1010 18:20:37.370123       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1010 18:20:37.370390       1 server.go:846] "Version info" version="v1.28.0"
	I1010 18:20:37.370404       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:20:37.371112       1 config.go:188] "Starting service config controller"
	I1010 18:20:37.371480       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1010 18:20:37.371522       1 config.go:97] "Starting endpoint slice config controller"
	I1010 18:20:37.371530       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1010 18:20:37.371859       1 config.go:315] "Starting node config controller"
	I1010 18:20:37.371872       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1010 18:20:37.472545       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1010 18:20:37.472576       1 shared_informer.go:318] Caches are synced for node config
	I1010 18:20:37.472558       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [40a7654c69d62a8d95b0f35cd0690ed73e1fdcfe1ca6c15bbfe41a93f8101259] <==
	I1010 18:20:34.136381       1 serving.go:348] Generated self-signed cert in-memory
	W1010 18:20:35.972789       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1010 18:20:35.972919       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1010 18:20:35.972941       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1010 18:20:35.972970       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1010 18:20:36.009453       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1010 18:20:36.009519       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:20:36.014065       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:20:36.014107       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1010 18:20:36.017092       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1010 18:20:36.017198       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1010 18:20:36.115268       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 10 18:20:48 old-k8s-version-141193 kubelet[724]: I1010 18:20:48.955450     724 topology_manager.go:215] "Topology Admit Handler" podUID="b471ecc7-c8aa-40fd-bbe2-b16f4f36530f" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-g8lm9"
	Oct 10 18:20:49 old-k8s-version-141193 kubelet[724]: I1010 18:20:49.074848     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzpnq\" (UniqueName: \"kubernetes.io/projected/b471ecc7-c8aa-40fd-bbe2-b16f4f36530f-kube-api-access-vzpnq\") pod \"kubernetes-dashboard-8694d4445c-g8lm9\" (UID: \"b471ecc7-c8aa-40fd-bbe2-b16f4f36530f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-g8lm9"
	Oct 10 18:20:49 old-k8s-version-141193 kubelet[724]: I1010 18:20:49.074907     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdsds\" (UniqueName: \"kubernetes.io/projected/91f3f2ac-4ba3-40e9-8173-386bdbdd8dae-kube-api-access-jdsds\") pod \"dashboard-metrics-scraper-5f989dc9cf-nsnjs\" (UID: \"91f3f2ac-4ba3-40e9-8173-386bdbdd8dae\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs"
	Oct 10 18:20:49 old-k8s-version-141193 kubelet[724]: I1010 18:20:49.074947     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b471ecc7-c8aa-40fd-bbe2-b16f4f36530f-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-g8lm9\" (UID: \"b471ecc7-c8aa-40fd-bbe2-b16f4f36530f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-g8lm9"
	Oct 10 18:20:49 old-k8s-version-141193 kubelet[724]: I1010 18:20:49.075090     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/91f3f2ac-4ba3-40e9-8173-386bdbdd8dae-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-nsnjs\" (UID: \"91f3f2ac-4ba3-40e9-8173-386bdbdd8dae\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs"
	Oct 10 18:20:58 old-k8s-version-141193 kubelet[724]: I1010 18:20:58.028630     724 scope.go:117] "RemoveContainer" containerID="6ebe203dda94ad2ffbefc3adcdc8edca63de95384ffb197e95d7d948c64a7df8"
	Oct 10 18:20:58 old-k8s-version-141193 kubelet[724]: I1010 18:20:58.042111     724 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-g8lm9" podStartSLOduration=4.333954969 podCreationTimestamp="2025-10-10 18:20:48 +0000 UTC" firstStartedPulling="2025-10-10 18:20:49.305710675 +0000 UTC m=+16.467991000" lastFinishedPulling="2025-10-10 18:20:55.013794559 +0000 UTC m=+22.176074875" observedRunningTime="2025-10-10 18:20:56.033737534 +0000 UTC m=+23.196017866" watchObservedRunningTime="2025-10-10 18:20:58.042038844 +0000 UTC m=+25.204319175"
	Oct 10 18:20:59 old-k8s-version-141193 kubelet[724]: I1010 18:20:59.033762     724 scope.go:117] "RemoveContainer" containerID="6ebe203dda94ad2ffbefc3adcdc8edca63de95384ffb197e95d7d948c64a7df8"
	Oct 10 18:20:59 old-k8s-version-141193 kubelet[724]: I1010 18:20:59.034142     724 scope.go:117] "RemoveContainer" containerID="317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734"
	Oct 10 18:20:59 old-k8s-version-141193 kubelet[724]: E1010 18:20:59.034560     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nsnjs_kubernetes-dashboard(91f3f2ac-4ba3-40e9-8173-386bdbdd8dae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs" podUID="91f3f2ac-4ba3-40e9-8173-386bdbdd8dae"
	Oct 10 18:21:00 old-k8s-version-141193 kubelet[724]: I1010 18:21:00.037934     724 scope.go:117] "RemoveContainer" containerID="317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734"
	Oct 10 18:21:00 old-k8s-version-141193 kubelet[724]: E1010 18:21:00.039017     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nsnjs_kubernetes-dashboard(91f3f2ac-4ba3-40e9-8173-386bdbdd8dae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs" podUID="91f3f2ac-4ba3-40e9-8173-386bdbdd8dae"
	Oct 10 18:21:01 old-k8s-version-141193 kubelet[724]: I1010 18:21:01.039783     724 scope.go:117] "RemoveContainer" containerID="317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734"
	Oct 10 18:21:01 old-k8s-version-141193 kubelet[724]: E1010 18:21:01.040216     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nsnjs_kubernetes-dashboard(91f3f2ac-4ba3-40e9-8173-386bdbdd8dae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs" podUID="91f3f2ac-4ba3-40e9-8173-386bdbdd8dae"
	Oct 10 18:21:08 old-k8s-version-141193 kubelet[724]: I1010 18:21:08.057898     724 scope.go:117] "RemoveContainer" containerID="194d18ca204baa8431464117f4490a32c01a38dcdc5e3a8e68285f79bd382765"
	Oct 10 18:21:14 old-k8s-version-141193 kubelet[724]: I1010 18:21:14.938848     724 scope.go:117] "RemoveContainer" containerID="317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734"
	Oct 10 18:21:15 old-k8s-version-141193 kubelet[724]: I1010 18:21:15.075734     724 scope.go:117] "RemoveContainer" containerID="317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734"
	Oct 10 18:21:15 old-k8s-version-141193 kubelet[724]: I1010 18:21:15.076046     724 scope.go:117] "RemoveContainer" containerID="1667847f042344bbbf08942c8b74a2e8385d5dfaf27c738cc310c23092d32a3d"
	Oct 10 18:21:15 old-k8s-version-141193 kubelet[724]: E1010 18:21:15.076454     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nsnjs_kubernetes-dashboard(91f3f2ac-4ba3-40e9-8173-386bdbdd8dae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs" podUID="91f3f2ac-4ba3-40e9-8173-386bdbdd8dae"
	Oct 10 18:21:19 old-k8s-version-141193 kubelet[724]: I1010 18:21:19.257336     724 scope.go:117] "RemoveContainer" containerID="1667847f042344bbbf08942c8b74a2e8385d5dfaf27c738cc310c23092d32a3d"
	Oct 10 18:21:19 old-k8s-version-141193 kubelet[724]: E1010 18:21:19.258133     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nsnjs_kubernetes-dashboard(91f3f2ac-4ba3-40e9-8173-386bdbdd8dae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs" podUID="91f3f2ac-4ba3-40e9-8173-386bdbdd8dae"
	Oct 10 18:21:23 old-k8s-version-141193 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 10 18:21:23 old-k8s-version-141193 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 10 18:21:23 old-k8s-version-141193 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 10 18:21:23 old-k8s-version-141193 systemd[1]: kubelet.service: Consumed 1.503s CPU time.
	
	
	==> kubernetes-dashboard [7b7c62874a1a37307babd4ba819091e951bc357eb79ac3fa62cffe33dbb78e22] <==
	2025/10/10 18:20:55 Using namespace: kubernetes-dashboard
	2025/10/10 18:20:55 Using in-cluster config to connect to apiserver
	2025/10/10 18:20:55 Using secret token for csrf signing
	2025/10/10 18:20:55 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/10 18:20:55 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/10 18:20:55 Successful initial request to the apiserver, version: v1.28.0
	2025/10/10 18:20:55 Generating JWE encryption key
	2025/10/10 18:20:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/10 18:20:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/10 18:20:55 Initializing JWE encryption key from synchronized object
	2025/10/10 18:20:55 Creating in-cluster Sidecar client
	2025/10/10 18:20:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/10 18:20:55 Serving insecurely on HTTP port: 9090
	2025/10/10 18:21:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/10 18:20:55 Starting overwatch
	
	
	==> storage-provisioner [194d18ca204baa8431464117f4490a32c01a38dcdc5e3a8e68285f79bd382765] <==
	I1010 18:20:37.304104       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1010 18:21:07.306571       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [de2790671165db40044a25122f17100a12947ed8065bf6e2ed1ff37219e247dd] <==
	I1010 18:21:08.112562       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 18:21:08.121112       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 18:21:08.121187       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1010 18:21:25.519721       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 18:21:25.519871       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-141193_cf7d7c94-4b36-4fbd-a6c2-b666a5f185e5!
	I1010 18:21:25.519919       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"783e5569-4ec9-4de4-9b38-064b377c9a54", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-141193_cf7d7c94-4b36-4fbd-a6c2-b666a5f185e5 became leader
	I1010 18:21:25.620150       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-141193_cf7d7c94-4b36-4fbd-a6c2-b666a5f185e5!
	

-- /stdout --
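
The start log captured above flags a kubectl/cluster version skew: the host kubectl is v1.34.1 while the cluster runs v1.28.0, six minor versions apart and well outside kubectl's supported ±1 minor-version skew. A minimal sketch of the workaround minikube itself suggests in that log, using the profile name from this run (the -p profile flag is minikube's standard profile selector, added here for clarity):

	# Use the bundled kubectl, which matches the cluster's v1.28.0,
	# instead of the host's /usr/local/bin/kubectl (v1.34.1)
	out/minikube-linux-amd64 -p old-k8s-version-141193 kubectl -- get pods -A
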
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-141193 -n old-k8s-version-141193
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-141193 -n old-k8s-version-141193: exit status 2 (313.079278ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
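
minikube status exits non-zero when any component is not reported healthy; here the kubelet was stopped for the pause step (see the kubelet section above), so exit status 2 alongside APIServer=Running is plausible, and the harness explicitly allows it as "may be ok". The --format flag takes a Go template over the status fields, so other components can be probed the same way; a sketch assuming the standard Host/Kubelet/APIServer fields documented for minikube status:

	# Probe individual component states via Go templates
	out/minikube-linux-amd64 status -p old-k8s-version-141193 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
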
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-141193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
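
The next post-mortem block dumps the complete docker inspect document for the node container. When only the run/pause state is of interest, docker's -f Go-template flag extracts it directly; a minimal sketch using the container name from this run and the .State fields visible in the JSON below:

	# Print only the container state instead of the full inspect JSON
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-141193
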
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-141193
helpers_test.go:243: (dbg) docker inspect old-k8s-version-141193:

-- stdout --
	[
	    {
	        "Id": "00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c",
	        "Created": "2025-10-10T18:19:07.516278103Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309448,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:20:24.185750468Z",
	            "FinishedAt": "2025-10-10T18:20:23.275982285Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c/hostname",
	        "HostsPath": "/var/lib/docker/containers/00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c/hosts",
	        "LogPath": "/var/lib/docker/containers/00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c/00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c-json.log",
	        "Name": "/old-k8s-version-141193",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-141193:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-141193",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "00949309f427ea7a77c95f92174ed346e22a737fad21a99c854c9a40990c276c",
	                "LowerDir": "/var/lib/docker/overlay2/8175d4c388af2a62328900c4de53ca564319b22d0194435beaabfec458b151c4-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8175d4c388af2a62328900c4de53ca564319b22d0194435beaabfec458b151c4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8175d4c388af2a62328900c4de53ca564319b22d0194435beaabfec458b151c4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8175d4c388af2a62328900c4de53ca564319b22d0194435beaabfec458b151c4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-141193",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-141193/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-141193",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-141193",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-141193",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bdd64c654c8a73fceb2bbfc445295573b418d0eff045ad8a213a0d19c8e16534",
	            "SandboxKey": "/var/run/docker/netns/bdd64c654c8a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-141193": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:0f:0f:f5:95:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7dff4078001ce0edf8fdd80b625c94d6d211c5682186b40a040629dae3a3adf3",
	                    "EndpointID": "2345ce8c4c4e3ff80777f98944677a95dffc02178c837aae723fd948bbd999ca",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-141193",
	                        "00949309f427"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
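[annotation, not harness output] The post-mortem above dumps the full `docker inspect` JSON, but the only fields the Pause test really checks are `.State.Running` and `.State.Paused` (here: running, not paused, even though the `pause` command failed). A minimal Go sketch of pulling just those fields with an inspect format template, assuming the docker CLI is on PATH; the helper name is illustrative, not part of helpers_test.go:

	// containerState returns the container's status plus its paused flag,
	// e.g. "running paused=false". Sketch only, under the assumptions above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "inspect",
			"--format", "{{.State.Status}} paused={{.State.Paused}}", name).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("old-k8s-version-141193")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println(state) // for the dump above this prints "running paused=false"
	}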
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-141193 -n old-k8s-version-141193
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-141193 -n old-k8s-version-141193: exit status 2 (323.495063ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
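[annotation, not harness output] `minikube status` prints the host state on stdout ("Running" above) while encoding component health in its exit code, which is why the harness tolerates exit status 2 here. A hedged sketch of capturing both, reusing the profile name from this run; the exit-code semantics are minikube's, not defined here:

	// Sketch only: re-run the status probe and surface stdout plus exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-141193")
		out, err := cmd.Output()
		code := 0
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			code = ee.ExitCode() // non-zero encodes unhealthy components
		}
		fmt.Printf("host=%q exit=%d\n", strings.TrimSpace(string(out)), code)
	}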
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-141193 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-141193 logs -n 25: (1.123498354s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-078032 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable metrics-server -p embed-certs-472518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ ssh     │ -p bridge-078032 sudo containerd config dump                                                                                                                                                                                                  │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ ssh     │ -p bridge-078032 sudo crio config                                                                                                                                                                                                             │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ delete  │ -p bridge-078032                                                                                                                                                                                                                              │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-141193 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p old-k8s-version-141193 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ stop    │ -p embed-certs-472518 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-556024 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ delete  │ -p disable-driver-mounts-523797                                                                                                                                                                                                               │ disable-driver-mounts-523797 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ stop    │ -p no-preload-556024 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-472518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p embed-certs-472518 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-556024 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p no-preload-556024 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-821769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-821769 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ old-k8s-version-141193 image list --format=json                                                                                                                                                                                               │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p old-k8s-version-141193 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:20:43
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:20:43.446366  316039 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:20:43.446643  316039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:20:43.446652  316039 out.go:374] Setting ErrFile to fd 2...
	I1010 18:20:43.446657  316039 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:20:43.446905  316039 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:20:43.447426  316039 out.go:368] Setting JSON to false
	I1010 18:20:43.448597  316039 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3783,"bootTime":1760116660,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:20:43.448694  316039 start.go:141] virtualization: kvm guest
	I1010 18:20:43.451659  316039 out.go:179] * [no-preload-556024] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:20:43.455280  316039 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:20:43.455310  316039 notify.go:220] Checking for updates...
	I1010 18:20:43.457194  316039 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:20:43.458229  316039 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:43.459338  316039 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:20:43.460374  316039 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:20:43.461326  316039 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:20:43.462916  316039 config.go:182] Loaded profile config "no-preload-556024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:43.463671  316039 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:20:43.494145  316039 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:20:43.494327  316039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:20:43.575548  316039 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-10 18:20:43.559967778 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:20:43.575688  316039 docker.go:318] overlay module found
	I1010 18:20:43.578025  316039 out.go:179] * Using the docker driver based on existing profile
	I1010 18:20:43.579242  316039 start.go:305] selected driver: docker
	I1010 18:20:43.579261  316039 start.go:925] validating driver "docker" against &{Name:no-preload-556024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:20:43.579415  316039 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:20:43.580194  316039 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:20:43.653363  316039 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-10-10 18:20:43.64191346 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:20:43.653670  316039 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:20:43.653698  316039 cni.go:84] Creating CNI manager for ""
	I1010 18:20:43.653755  316039 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:20:43.653825  316039 start.go:349] cluster config:
	{Name:no-preload-556024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:20:43.659998  316039 out.go:179] * Starting "no-preload-556024" primary control-plane node in "no-preload-556024" cluster
	I1010 18:20:43.661318  316039 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:20:43.662567  316039 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:20:43.663594  316039 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:20:43.663673  316039 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:20:43.663749  316039 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/config.json ...
	I1010 18:20:43.664143  316039 cache.go:107] acquiring lock: {Name:mkdface014b0b0c18e2529a8fc2cf742979f5f8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664226  316039 cache.go:107] acquiring lock: {Name:mkd574c74807a65d6c1e08f0a6d292773ee4d51a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664257  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1010 18:20:43.664286  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1010 18:20:43.664290  316039 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 149.274µs
	I1010 18:20:43.664294  316039 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 71.383µs
	I1010 18:20:43.664309  316039 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1010 18:20:43.664309  316039 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1010 18:20:43.664330  316039 cache.go:107] acquiring lock: {Name:mk6c1abc09453f5583a50c7348563cf680f08172 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664353  316039 cache.go:107] acquiring lock: {Name:mk8a6cf34543e68ad996fdd3dfcc536ed23f13a3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664378  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1010 18:20:43.664386  316039 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 58.29µs
	I1010 18:20:43.664398  316039 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1010 18:20:43.664401  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1010 18:20:43.664414  316039 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 62.339µs
	I1010 18:20:43.664412  316039 cache.go:107] acquiring lock: {Name:mk589006dd1715c9cef02bfeb051e2a5fdd82d1b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664423  316039 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1010 18:20:43.664435  316039 cache.go:107] acquiring lock: {Name:mk346c7b9277054f446ecd193d09cac2f17a13f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664474  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1010 18:20:43.664330  316039 cache.go:107] acquiring lock: {Name:mk43600d297347b2bd1ef8f04fef87e9e24d614a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664560  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1010 18:20:43.664579  316039 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 251.287µs
	I1010 18:20:43.664587  316039 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1010 18:20:43.664447  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1010 18:20:43.664606  316039 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 195.132µs
	I1010 18:20:43.664619  316039 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1010 18:20:43.664143  316039 cache.go:107] acquiring lock: {Name:mk4f454812d4444d82ff12e1c427c98a877e5e2f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.664653  316039 cache.go:115] /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1010 18:20:43.664663  316039 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 550.696µs
	I1010 18:20:43.664673  316039 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1010 18:20:43.664483  316039 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 49.234µs
	I1010 18:20:43.664681  316039 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1010 18:20:43.664688  316039 cache.go:87] Successfully saved all images to host disk.
	I1010 18:20:43.689240  316039 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:20:43.689261  316039 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:20:43.689283  316039 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:20:43.689321  316039 start.go:360] acquireMachinesLock for no-preload-556024: {Name:mk3ff552b11677088d4385d2ba43c142109fcf3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:20:43.689401  316039 start.go:364] duration metric: took 59.53µs to acquireMachinesLock for "no-preload-556024"
	I1010 18:20:43.689425  316039 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:20:43.689435  316039 fix.go:54] fixHost starting: 
	I1010 18:20:43.689696  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:43.716175  316039 fix.go:112] recreateIfNeeded on no-preload-556024: state=Stopped err=<nil>
	W1010 18:20:43.716210  316039 fix.go:138] unexpected machine state, will restart: <nil>
	W1010 18:20:39.530761  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:41.532918  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:43.534100  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	I1010 18:20:41.340446  310776 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 18:20:41.340555  310776 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 18:20:42.841252  310776 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.500922644s
	I1010 18:20:42.844237  310776 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1010 18:20:42.844348  310776 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8444/livez
	I1010 18:20:42.844433  310776 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1010 18:20:42.844518  310776 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1010 18:20:44.598226  310776 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.753880491s
	I1010 18:20:45.121438  310776 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.277113545s
	I1010 18:20:46.346293  310776 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.502033618s
	I1010 18:20:46.357281  310776 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 18:20:46.366479  310776 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 18:20:46.375532  310776 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 18:20:46.375817  310776 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-821769 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 18:20:46.384299  310776 kubeadm.go:318] [bootstrap-token] Using token: gwvnud.yj4fhfjb9apke821
	I1010 18:20:42.077576  315243 out.go:252] * Restarting existing docker container for "embed-certs-472518" ...
	I1010 18:20:42.077652  315243 cli_runner.go:164] Run: docker start embed-certs-472518
	I1010 18:20:42.324899  315243 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:20:42.344432  315243 kic.go:430] container "embed-certs-472518" state is running.
	I1010 18:20:42.344870  315243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-472518
	I1010 18:20:42.364868  315243 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/config.json ...
	I1010 18:20:42.365194  315243 machine.go:93] provisionDockerMachine start ...
	I1010 18:20:42.365274  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:42.384498  315243 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:42.384729  315243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1010 18:20:42.384743  315243 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:20:42.385417  315243 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54750->127.0.0.1:33113: read: connection reset by peer
	I1010 18:20:45.520224  315243 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-472518
	
	I1010 18:20:45.520254  315243 ubuntu.go:182] provisioning hostname "embed-certs-472518"
	I1010 18:20:45.520313  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:45.539008  315243 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:45.539308  315243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1010 18:20:45.539325  315243 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-472518 && echo "embed-certs-472518" | sudo tee /etc/hostname
	I1010 18:20:45.697980  315243 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-472518
	
	I1010 18:20:45.698066  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:45.719981  315243 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:45.720234  315243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1010 18:20:45.720267  315243 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-472518' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-472518/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-472518' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:20:45.864595  315243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:20:45.864632  315243 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:20:45.864669  315243 ubuntu.go:190] setting up certificates
	I1010 18:20:45.864681  315243 provision.go:84] configureAuth start
	I1010 18:20:45.864752  315243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-472518
	I1010 18:20:45.886254  315243 provision.go:143] copyHostCerts
	I1010 18:20:45.886322  315243 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:20:45.886336  315243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:20:45.886413  315243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:20:45.886551  315243 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:20:45.886565  315243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:20:45.886615  315243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:20:45.886698  315243 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:20:45.886709  315243 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:20:45.886745  315243 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:20:45.886812  315243 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.embed-certs-472518 san=[127.0.0.1 192.168.94.2 embed-certs-472518 localhost minikube]
	I1010 18:20:46.271763  315243 provision.go:177] copyRemoteCerts
	I1010 18:20:46.271823  315243 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:20:46.271855  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:46.291521  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:46.392626  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:20:46.415271  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1010 18:20:46.434707  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:20:46.454219  315243 provision.go:87] duration metric: took 589.52001ms to configureAuth
	I1010 18:20:46.454244  315243 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:20:46.454427  315243 config.go:182] Loaded profile config "embed-certs-472518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:46.454546  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:46.473500  315243 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:46.473704  315243 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1010 18:20:46.473721  315243 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:20:46.789031  315243 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:20:46.789116  315243 machine.go:96] duration metric: took 4.423902548s to provisionDockerMachine
	I1010 18:20:46.789130  315243 start.go:293] postStartSetup for "embed-certs-472518" (driver="docker")
	I1010 18:20:46.789143  315243 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:20:46.789210  315243 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:20:46.789258  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:46.815152  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:46.385437  310776 out.go:252]   - Configuring RBAC rules ...
	I1010 18:20:46.385588  310776 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 18:20:46.389824  310776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 18:20:46.394691  310776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 18:20:46.397355  310776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 18:20:46.399852  310776 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 18:20:46.402418  310776 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 18:20:46.752330  310776 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 18:20:47.169598  310776 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1010 18:20:47.752782  310776 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1010 18:20:47.754001  310776 kubeadm.go:318] 
	I1010 18:20:47.754109  310776 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1010 18:20:47.754123  310776 kubeadm.go:318] 
	I1010 18:20:47.754232  310776 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1010 18:20:47.754244  310776 kubeadm.go:318] 
	I1010 18:20:47.754289  310776 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1010 18:20:47.754398  310776 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 18:20:47.754483  310776 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 18:20:47.754492  310776 kubeadm.go:318] 
	I1010 18:20:47.754572  310776 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1010 18:20:47.754589  310776 kubeadm.go:318] 
	I1010 18:20:47.754658  310776 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 18:20:47.754668  310776 kubeadm.go:318] 
	I1010 18:20:47.754745  310776 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1010 18:20:47.754863  310776 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 18:20:47.754965  310776 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 18:20:47.755000  310776 kubeadm.go:318] 
	I1010 18:20:47.755138  310776 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 18:20:47.755249  310776 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1010 18:20:47.755261  310776 kubeadm.go:318] 
	I1010 18:20:47.755379  310776 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token gwvnud.yj4fhfjb9apke821 \
	I1010 18:20:47.755581  310776 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f \
	I1010 18:20:47.755622  310776 kubeadm.go:318] 	--control-plane 
	I1010 18:20:47.755633  310776 kubeadm.go:318] 
	I1010 18:20:47.755764  310776 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1010 18:20:47.755779  310776 kubeadm.go:318] 
	I1010 18:20:47.755902  310776 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token gwvnud.yj4fhfjb9apke821 \
	I1010 18:20:47.756083  310776 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f 
	I1010 18:20:47.759459  310776 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1010 18:20:47.759612  310776 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 18:20:47.759649  310776 cni.go:84] Creating CNI manager for ""
	I1010 18:20:47.759660  310776 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:20:47.761460  310776 out.go:179] * Configuring CNI (Container Networking Interface) ...
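The two cni.go lines above record the decision minikube logs before touching networking: with the docker driver and the crio runtime, pod traffic cannot ride on Docker's built-in bridge, so kindnet is recommended. A minimal Go sketch of that driver/runtime mapping (the helper name is made up; minikube's real logic in its cni package weighs more inputs, such as multinode and a user-supplied --cni flag):

package main

import "fmt"

// chooseCNI mirrors only the single case logged above; it is not minikube's code.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime == "crio" {
		// Docker's bridge gives no pod network to cri-o containers,
		// so a dedicated CNI (kindnet) is the recommended default.
		return "kindnet"
	}
	return "bridge" // assumed fallback for this sketch
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // kindnet
}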
	I1010 18:20:46.914251  315243 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:20:46.918720  315243 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:20:46.918754  315243 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:20:46.918767  315243 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:20:46.918823  315243 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:20:46.918934  315243 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:20:46.919076  315243 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:20:46.928469  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:20:46.951615  315243 start.go:296] duration metric: took 162.458821ms for postStartSetup
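The filesync scan above implements a simple convention: anything placed under .minikube/files/ is mirrored onto the node at the same path rooted at /, which is how files/etc/ssl/certs/93542.pem becomes /etc/ssl/certs/93542.pem. A sketch of that mapping (the function name is hypothetical; minikube's real implementation lives in filesync.go):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// remotePath maps a local asset under filesRoot to its destination on the node.
func remotePath(filesRoot, localAsset string) (string, error) {
	rel, err := filepath.Rel(filesRoot, localAsset)
	if err != nil || strings.HasPrefix(rel, "..") {
		return "", fmt.Errorf("%s is not under %s", localAsset, filesRoot)
	}
	return "/" + rel, nil
}

func main() {
	dst, _ := remotePath(
		"/home/jenkins/minikube-integration/21724-5815/.minikube/files",
		"/home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem",
	)
	fmt.Println(dst) // /etc/ssl/certs/93542.pem
}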
	I1010 18:20:46.951700  315243 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:20:46.951744  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:46.972432  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
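The cli_runner template above recovers the host port Docker published for the container's 22/tcp mapping, which then seeds the SSH client (port 33113 here). The same lookup can be reproduced with the docker CLI and the identical Go template; a sketch assuming docker is on PATH and the container exists:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port maps to the container's 22/tcp.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("embed-certs-472518")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh to 127.0.0.1:" + port) // e.g. 33113 above
}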
	I1010 18:20:47.076364  315243 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:20:47.081264  315243 fix.go:56] duration metric: took 5.026311661s for fixHost
	I1010 18:20:47.081299  315243 start.go:83] releasing machines lock for "embed-certs-472518", held for 5.026378467s
	I1010 18:20:47.081380  315243 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-472518
	I1010 18:20:47.100059  315243 ssh_runner.go:195] Run: cat /version.json
	I1010 18:20:47.100111  315243 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:20:47.100122  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:47.100174  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:47.122805  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:47.124141  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:47.227891  315243 ssh_runner.go:195] Run: systemctl --version
	I1010 18:20:47.299889  315243 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:20:47.336545  315243 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:20:47.341187  315243 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:20:47.341242  315243 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:20:47.350350  315243 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
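The find/mv pair above sidelines any pre-existing bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs stays active; here nothing matched. A sketch of the same rename pass in Go (local filesystem, no sudo):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman configs in dir to *.mk_disabled,
// mirroring the `find ... -exec mv` pipeline from the log.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIs("/etc/cni/net.d")
	fmt.Println(moved, err)
}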
	I1010 18:20:47.350370  315243 start.go:495] detecting cgroup driver to use...
	I1010 18:20:47.350396  315243 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:20:47.350445  315243 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:20:47.365413  315243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:20:47.379380  315243 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:20:47.379437  315243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:20:47.395098  315243 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:20:47.409632  315243 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:20:47.495438  315243 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:20:47.584238  315243 docker.go:234] disabling docker service ...
	I1010 18:20:47.584305  315243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:20:47.600224  315243 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:20:47.614516  315243 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:20:47.704010  315243 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:20:47.792697  315243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:20:47.808011  315243 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:20:47.826927  315243 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:20:47.826983  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.837633  315243 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:20:47.837698  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.848119  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.859624  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.870939  315243 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:20:47.882141  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.894494  315243 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:47.906184  315243 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
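The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to systemd, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls. This sketch prints the drop-in those edits plausibly converge to; it is a reconstruction from the commands, not a file captured from the node:

package main

import "fmt"

// crioDropIn is the assumed end state of 02-crio.conf after the sed edits.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() { fmt.Print(crioDropIn) }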
	I1010 18:20:47.916671  315243 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:20:47.924923  315243 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:20:47.934175  315243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:48.032532  315243 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:20:48.213272  315243 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:20:48.213343  315243 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:20:48.217822  315243 start.go:563] Will wait 60s for crictl version
	I1010 18:20:48.217887  315243 ssh_runner.go:195] Run: which crictl
	I1010 18:20:48.221636  315243 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:20:48.247933  315243 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:20:48.248044  315243 ssh_runner.go:195] Run: crio --version
	I1010 18:20:48.280438  315243 ssh_runner.go:195] Run: crio --version
	I1010 18:20:48.313602  315243 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
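The two "Will wait 60s" lines above are bounded polls: stat the runtime socket until it exists, then probe crictl version the same way before declaring the runtime ready. A local sketch of that loop (the real code runs stat on the node over SSH via ssh_runner; the 500ms interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present; crictl version can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}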
	I1010 18:20:43.718100  316039 out.go:252] * Restarting existing docker container for "no-preload-556024" ...
	I1010 18:20:43.718195  316039 cli_runner.go:164] Run: docker start no-preload-556024
	I1010 18:20:44.003543  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:44.025442  316039 kic.go:430] container "no-preload-556024" state is running.
	I1010 18:20:44.025897  316039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-556024
	I1010 18:20:44.048338  316039 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/config.json ...
	I1010 18:20:44.048652  316039 machine.go:93] provisionDockerMachine start ...
	I1010 18:20:44.048722  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:44.071078  316039 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:44.071356  316039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1010 18:20:44.071373  316039 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:20:44.071958  316039 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50844->127.0.0.1:33118: read: connection reset by peer
	I1010 18:20:47.219988  316039 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-556024
	
	I1010 18:20:47.220017  316039 ubuntu.go:182] provisioning hostname "no-preload-556024"
	I1010 18:20:47.220124  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:47.240083  316039 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:47.240315  316039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1010 18:20:47.240331  316039 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-556024 && echo "no-preload-556024" | sudo tee /etc/hostname
	I1010 18:20:47.384842  316039 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-556024
	
	I1010 18:20:47.384916  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:47.403676  316039 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:47.403883  316039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1010 18:20:47.403900  316039 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-556024' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-556024/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-556024' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:20:47.541827  316039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:20:47.541854  316039 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:20:47.541874  316039 ubuntu.go:190] setting up certificates
	I1010 18:20:47.541882  316039 provision.go:84] configureAuth start
	I1010 18:20:47.541927  316039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-556024
	I1010 18:20:47.561676  316039 provision.go:143] copyHostCerts
	I1010 18:20:47.561736  316039 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:20:47.561750  316039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:20:47.561822  316039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:20:47.561945  316039 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:20:47.561957  316039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:20:47.561985  316039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:20:47.562088  316039 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:20:47.562100  316039 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:20:47.562130  316039 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:20:47.562203  316039 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.no-preload-556024 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-556024]
	I1010 18:20:47.678388  316039 provision.go:177] copyRemoteCerts
	I1010 18:20:47.678453  316039 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:20:47.678494  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:47.696871  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:47.801868  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:20:47.826483  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 18:20:47.849207  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 18:20:47.872413  316039 provision.go:87] duration metric: took 330.51941ms to configureAuth
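configureAuth above regenerates the machine's server certificate so its SANs cover every name the endpoint may be dialed by: 127.0.0.1, the container IP 192.168.76.2, localhost, minikube, and the profile name. A self-contained sketch of issuing such a certificate with Go's crypto/x509, self-signed for brevity where minikube signs with its persistent ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-556024"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line: IPs and DNS names live in separate lists.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-556024"},
	}
	// Self-signed here; pass a CA cert/key as parent to sign against a real CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued server cert, %d bytes DER\n", len(der))
}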
	I1010 18:20:47.872443  316039 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:20:47.872620  316039 config.go:182] Loaded profile config "no-preload-556024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:47.872755  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:47.895966  316039 main.go:141] libmachine: Using SSH client type: native
	I1010 18:20:47.896218  316039 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1010 18:20:47.896242  316039 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:20:48.278422  316039 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:20:48.278452  316039 machine.go:96] duration metric: took 4.229784935s to provisionDockerMachine
	I1010 18:20:48.278468  316039 start.go:293] postStartSetup for "no-preload-556024" (driver="docker")
	I1010 18:20:48.278483  316039 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:20:48.278552  316039 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:20:48.278614  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:48.299387  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:48.409396  316039 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:20:48.413415  316039 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:20:48.413447  316039 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:20:48.413459  316039 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:20:48.413503  316039 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:20:48.413586  316039 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:20:48.413677  316039 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:20:48.423085  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:20:48.445117  316039 start.go:296] duration metric: took 166.633308ms for postStartSetup
	I1010 18:20:48.445191  316039 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:20:48.445225  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	W1010 18:20:46.032712  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:48.033716  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	I1010 18:20:48.317208  315243 cli_runner.go:164] Run: docker network inspect embed-certs-472518 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:20:48.336738  315243 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1010 18:20:48.344444  315243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
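The bash one-liner above keeps the host.minikube.internal entry idempotent: drop any existing line ending in the name, append the fresh IP/name pair, and copy the temp file back over /etc/hosts. The same rewrite in Go, operating on a string rather than the live file:

package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any line mapped to name, then appends ip<TAB>name.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.94.1\thost.minikube.internal\n"
	fmt.Print(upsertHost(in, "192.168.94.1", "host.minikube.internal"))
}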
	I1010 18:20:48.359751  315243 kubeadm.go:883] updating cluster {Name:embed-certs-472518 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:20:48.359866  315243 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:20:48.359903  315243 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:20:48.394787  315243 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:20:48.394808  315243 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:20:48.394850  315243 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:20:48.422591  315243 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:20:48.422611  315243 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:20:48.422618  315243 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1010 18:20:48.422707  315243 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-472518 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:20:48.422772  315243 ssh_runner.go:195] Run: crio config
	I1010 18:20:48.471617  315243 cni.go:84] Creating CNI manager for ""
	I1010 18:20:48.471643  315243 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:20:48.471662  315243 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 18:20:48.471692  315243 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-472518 NodeName:embed-certs-472518 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:20:48.471834  315243 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-472518"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:20:48.471900  315243 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:20:48.482685  315243 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:20:48.482762  315243 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:20:48.492297  315243 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1010 18:20:48.507309  315243 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:20:48.521884  315243 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
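The 2214-byte kubeadm.yaml.new just uploaded is the multi-document stream printed above: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---. A dependency-free sketch that splits such a stream and lists each document's kind, handy for checking that an upload carries all four documents:

package main

import (
	"fmt"
	"strings"
)

// kinds returns the `kind:` value of every YAML document in a "---" stream.
func kinds(multiDoc string) []string {
	var out []string
	for _, doc := range strings.Split(multiDoc, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				out = append(out, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return out
}

func main() {
	sample := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
	fmt.Println(kinds(sample)) // [InitConfiguration ClusterConfiguration]
}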
	I1010 18:20:48.537302  315243 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:20:48.541606  315243 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:20:48.552248  315243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:48.648834  315243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:20:48.671702  315243 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518 for IP: 192.168.94.2
	I1010 18:20:48.671724  315243 certs.go:195] generating shared ca certs ...
	I1010 18:20:48.671744  315243 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:48.671901  315243 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:20:48.671949  315243 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:20:48.671960  315243 certs.go:257] generating profile certs ...
	I1010 18:20:48.672048  315243 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/client.key
	I1010 18:20:48.672135  315243 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key.37abe28c
	I1010 18:20:48.672172  315243 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.key
	I1010 18:20:48.672285  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:20:48.672313  315243 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:20:48.672320  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:20:48.672346  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:20:48.672365  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:20:48.672386  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:20:48.672421  315243 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:20:48.673064  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:20:48.697896  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:20:48.721920  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:20:48.746177  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:20:48.773805  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1010 18:20:48.797763  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1010 18:20:48.821956  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:20:48.845335  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/embed-certs-472518/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 18:20:48.866318  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:20:48.890302  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:20:48.910153  315243 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:20:48.932176  315243 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:20:48.953102  315243 ssh_runner.go:195] Run: openssl version
	I1010 18:20:48.961833  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:20:48.974420  315243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:20:48.979097  315243 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:20:48.979165  315243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:20:49.017904  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:20:49.028691  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:20:49.045017  315243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:20:49.049108  315243 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:20:49.049166  315243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:20:49.085808  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:20:49.095911  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:20:49.105985  315243 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:49.110274  315243 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:49.110329  315243 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:49.150752  315243 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:20:49.164858  315243 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:20:49.169330  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:20:49.221633  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:20:49.280769  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:20:49.360389  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:20:49.408955  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:20:49.448148  315243 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
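Each openssl x509 -checkend 86400 run above asks one question: does this certificate expire within the next 24 hours? openssl exits non-zero if it does, which is what would trigger regeneration. The equivalent check in Go's crypto/x509; the path is taken from the log, so the sketch must run where that file exists:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}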
	I1010 18:20:49.488852  315243 kubeadm.go:400] StartCluster: {Name:embed-certs-472518 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-472518 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:20:49.488956  315243 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:20:49.489020  315243 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:20:49.528775  315243 cri.go:89] found id: "159136e63b21ef09e85b6efdc6b5a0f5be67f5af9a3516c5f8cae7be0af60846"
	I1010 18:20:49.528796  315243 cri.go:89] found id: "3622c66fa378c4b8614e23f6545ac6151fa6ef096364723cbdd5d22677bc0ca9"
	I1010 18:20:49.528802  315243 cri.go:89] found id: "a5c1be1847d40640048f86d96a7f93b4166d1688a8afd40971231c2b59f73202"
	I1010 18:20:49.528807  315243 cri.go:89] found id: "a52804abc0e7184b8ec037e1a9594b3794f50868b2f90978e95ba4f3dac34818"
	I1010 18:20:49.528811  315243 cri.go:89] found id: ""
	I1010 18:20:49.528852  315243 ssh_runner.go:195] Run: sudo runc list -f json
	W1010 18:20:49.546231  315243 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:20:49Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:20:49.546375  315243 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:20:49.558092  315243 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1010 18:20:49.558114  315243 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1010 18:20:49.558164  315243 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 18:20:49.575197  315243 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:20:49.575886  315243 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-472518" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:49.576504  315243 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-5815/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-472518" cluster setting kubeconfig missing "embed-certs-472518" context setting]
	I1010 18:20:49.577193  315243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:49.578945  315243 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 18:20:49.590650  315243 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1010 18:20:49.590685  315243 kubeadm.go:601] duration metric: took 32.565143ms to restartPrimaryControlPlane
	I1010 18:20:49.590695  315243 kubeadm.go:402] duration metric: took 101.853492ms to StartCluster
	I1010 18:20:49.590713  315243 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:49.590778  315243 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:49.592554  315243 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:49.592830  315243 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:20:49.592901  315243 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:20:49.593019  315243 config.go:182] Loaded profile config "embed-certs-472518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:49.593025  315243 addons.go:69] Setting dashboard=true in profile "embed-certs-472518"
	I1010 18:20:49.593043  315243 addons.go:69] Setting default-storageclass=true in profile "embed-certs-472518"
	I1010 18:20:49.593086  315243 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-472518"
	I1010 18:20:49.593067  315243 addons.go:238] Setting addon dashboard=true in "embed-certs-472518"
	W1010 18:20:49.593186  315243 addons.go:247] addon dashboard should already be in state true
	I1010 18:20:49.593234  315243 host.go:66] Checking if "embed-certs-472518" exists ...
	I1010 18:20:49.593029  315243 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-472518"
	I1010 18:20:49.593289  315243 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-472518"
	W1010 18:20:49.593302  315243 addons.go:247] addon storage-provisioner should already be in state true
	I1010 18:20:49.593335  315243 host.go:66] Checking if "embed-certs-472518" exists ...
	I1010 18:20:49.593410  315243 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:20:49.593740  315243 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:20:49.593886  315243 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:20:49.595259  315243 out.go:179] * Verifying Kubernetes components...
	I1010 18:20:49.596615  315243 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:49.621223  315243 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1010 18:20:49.621687  315243 addons.go:238] Setting addon default-storageclass=true in "embed-certs-472518"
	W1010 18:20:49.621713  315243 addons.go:247] addon default-storageclass should already be in state true
	I1010 18:20:49.621741  315243 host.go:66] Checking if "embed-certs-472518" exists ...
	I1010 18:20:49.622223  315243 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:20:49.623807  315243 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:20:49.624706  315243 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1010 18:20:48.463897  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:48.560880  316039 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:20:48.565518  316039 fix.go:56] duration metric: took 4.87607827s for fixHost
	I1010 18:20:48.565545  316039 start.go:83] releasing machines lock for "no-preload-556024", held for 4.876130567s
	I1010 18:20:48.565605  316039 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-556024
	I1010 18:20:48.590212  316039 ssh_runner.go:195] Run: cat /version.json
	I1010 18:20:48.590274  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:48.590309  316039 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:20:48.590374  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:48.611239  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:48.611223  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:48.707641  316039 ssh_runner.go:195] Run: systemctl --version
	I1010 18:20:48.779239  316039 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:20:48.822991  316039 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:20:48.827985  316039 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:20:48.828127  316039 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:20:48.838254  316039 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1010 18:20:48.838278  316039 start.go:495] detecting cgroup driver to use...
	I1010 18:20:48.838310  316039 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:20:48.838375  316039 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:20:48.855699  316039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:20:48.870095  316039 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:20:48.870150  316039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:20:48.889387  316039 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:20:48.903428  316039 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:20:49.004846  316039 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:20:49.095121  316039 docker.go:234] disabling docker service ...
	I1010 18:20:49.095195  316039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:20:49.111399  316039 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:20:49.124564  316039 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:20:49.233199  316039 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:20:49.371321  316039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:20:49.391416  316039 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:20:49.410665  316039 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:20:49.410726  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.422109  316039 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:20:49.422187  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.434507  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.445435  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.456792  316039 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:20:49.467113  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.478960  316039 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.491083  316039 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:20:49.504692  316039 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:20:49.516727  316039 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:20:49.528623  316039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:49.657664  316039 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:20:49.845402  316039 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:20:49.845485  316039 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:20:49.849613  316039 start.go:563] Will wait 60s for crictl version
	I1010 18:20:49.849677  316039 ssh_runner.go:195] Run: which crictl
	I1010 18:20:49.853537  316039 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:20:49.887342  316039 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:20:49.887433  316039 ssh_runner.go:195] Run: crio --version
	I1010 18:20:49.930383  316039 ssh_runner.go:195] Run: crio --version
	I1010 18:20:49.976214  316039 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:20:47.762395  310776 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1010 18:20:47.766851  310776 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1010 18:20:47.766871  310776 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1010 18:20:47.783354  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1010 18:20:48.028048  310776 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 18:20:48.028155  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:48.028511  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-821769 minikube.k8s.io/updated_at=2025_10_10T18_20_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46 minikube.k8s.io/name=default-k8s-diff-port-821769 minikube.k8s.io/primary=true
	I1010 18:20:48.041226  310776 ops.go:34] apiserver oom_adj: -16
	I1010 18:20:48.128327  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:48.629265  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:49.129256  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:49.631157  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:50.128594  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:50.629277  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:51.129277  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
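The burst of kubectl get sa default runs above (one roughly every 500ms) is a readiness poll: the "default" ServiceAccount only appears once the controller-manager's token controller is up, so its existence gates the RBAC bootstrap. A generic sketch of that poll; the log's exact command uses the versioned kubectl binary under /var/lib/minikube, and the interval and timeout here are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until `kubectl get sa default` succeeds.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // ServiceAccount exists; RBAC bootstrap can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}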
	I1010 18:20:49.977548  316039 cli_runner.go:164] Run: docker network inspect no-preload-556024 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:20:50.001823  316039 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1010 18:20:50.006080  316039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:20:50.017957  316039 kubeadm.go:883] updating cluster {Name:no-preload-556024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:20:50.018111  316039 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:20:50.018151  316039 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:20:50.065609  316039 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:20:50.065631  316039 cache_images.go:85] Images are preloaded, skipping loading
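
The preload check above runs `sudo crictl images --output json` and concludes all images are already present for the cri-o runtime. A sketch of that comparison; the struct mirrors only the field it needs from crictl's JSON, and the one-element required list is an illustrative stand-in for the real per-Kubernetes-version image set:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // crictlImages covers only what this check reads from
    // `crictl images --output json`.
    type crictlImages struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		panic(err)
    	}
    	var imgs crictlImages
    	if err := json.Unmarshal(out, &imgs); err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, img := range imgs.Images {
    		for _, tag := range img.RepoTags {
    			have[tag] = true
    		}
    	}
    	// Illustrative stand-in for the per-version image list.
    	for _, want := range []string{"registry.k8s.io/kube-apiserver:v1.34.1"} {
    		fmt.Printf("%s preloaded: %v\n", want, have[want])
    	}
    }
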
	I1010 18:20:50.065639  316039 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1010 18:20:50.065740  316039 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-556024 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:20:50.065812  316039 ssh_runner.go:195] Run: crio config
	I1010 18:20:50.129406  316039 cni.go:84] Creating CNI manager for ""
	I1010 18:20:50.129498  316039 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:20:50.129530  316039 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 18:20:50.129567  316039 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-556024 NodeName:no-preload-556024 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:20:50.129730  316039 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-556024"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
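The generated config is written out as /var/tmp/minikube/kubeadm.yaml.new (2213 bytes, below) and later diffed against the live copy; a zero exit from diff is what lets kubeadm.go conclude at 18:20:51.525552 that "The running cluster does not require reconfiguration". A sketch of that decision, relying only on diff's standard exit codes (0 = identical, 1 = different):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	err := exec.Command("sudo", "diff", "-u",
    		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
    	switch e := err.(type) {
    	case nil:
    		fmt.Println("configs identical: no reconfiguration required")
    	case *exec.ExitError:
    		if e.ExitCode() == 1 {
    			fmt.Println("configs differ: control plane must be reconfigured")
    		} else {
    			fmt.Println("diff failed:", e)
    		}
    	default:
    		fmt.Println("could not run diff:", err)
    	}
    }
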
	I1010 18:20:50.129812  316039 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:20:50.142159  316039 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:20:50.142246  316039 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:20:50.152351  316039 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1010 18:20:50.168174  316039 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:20:50.184704  316039 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1010 18:20:50.201688  316039 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:20:50.205576  316039 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:20:50.216719  316039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:50.314580  316039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:20:50.339172  316039 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024 for IP: 192.168.76.2
	I1010 18:20:50.339196  316039 certs.go:195] generating shared ca certs ...
	I1010 18:20:50.339214  316039 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:50.339389  316039 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:20:50.339439  316039 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:20:50.339454  316039 certs.go:257] generating profile certs ...
	I1010 18:20:50.339572  316039 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/client.key
	I1010 18:20:50.339656  316039 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key.b1bc56db
	I1010 18:20:50.339729  316039 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.key
	I1010 18:20:50.339901  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:20:50.339937  316039 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:20:50.339947  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:20:50.339978  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:20:50.340018  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:20:50.340047  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:20:50.340152  316039 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:20:50.341083  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:20:50.369071  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:20:50.396382  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:20:50.426223  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:20:50.462107  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:20:50.492175  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:20:50.515308  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:20:50.542463  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/no-preload-556024/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1010 18:20:50.567288  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:20:50.593916  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:20:50.623441  316039 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:20:50.661822  316039 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:20:50.685220  316039 ssh_runner.go:195] Run: openssl version
	I1010 18:20:50.694018  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:20:50.707964  316039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:20:50.714772  316039 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:20:50.714863  316039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:20:50.775759  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:20:50.789361  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:20:50.802813  316039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:20:50.807903  316039 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:20:50.807966  316039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:20:50.865904  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:20:50.883902  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:20:50.901914  316039 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:50.908945  316039 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:50.909005  316039 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:20:50.970081  316039 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
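
Each CA above is linked into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0), which is how OpenSSL-based clients discover trust anchors. A sketch of the hash-and-link step for the minikubeCA file; a real run needs root, as the sudo invocations in the log show:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
    		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := "/etc/ssl/certs/" + hash + ".0"
    	os.Remove(link) // ln -fs semantics: replace any stale link
    	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link)
    }
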
	I1010 18:20:50.984254  316039 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:20:50.990832  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:20:51.055643  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:20:51.124755  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:20:51.195467  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:20:51.257855  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:20:51.321018  316039 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
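
`openssl x509 -checkend 86400` asks whether a certificate will still be valid 24 hours from now; each control-plane cert above passes, so nothing is regenerated. The same check with Go's crypto/x509, sketched for one of the files from the log:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// `openssl x509 -checkend 86400`: still valid 24 hours from now?
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h; it would be regenerated")
    	} else {
    		fmt.Println("certificate valid for at least another 24h")
    	}
    }
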
	I1010 18:20:51.374149  316039 kubeadm.go:400] StartCluster: {Name:no-preload-556024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-556024 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:20:51.374313  316039 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:20:51.374389  316039 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:20:51.434445  316039 cri.go:89] found id: "624948aa983f6a950a5a86e99ebbf4e3cec99b2849460ed697524b3fc4ffac05"
	I1010 18:20:51.434471  316039 cri.go:89] found id: "63abfddfe6fe2887c4901b8e265aae05ec3330bd42bd0d67e011b354a39c6023"
	I1010 18:20:51.434477  316039 cri.go:89] found id: "579953ecaa5c709ae190ac505c57c31de755d4d689b3be28199b4f18c038f574"
	I1010 18:20:51.434482  316039 cri.go:89] found id: "f690c75f2865bf33ee267a92d360114ddc8d677ee96e0e894aa2e4d900fd9adf"
	I1010 18:20:51.434486  316039 cri.go:89] found id: ""
	I1010 18:20:51.434533  316039 ssh_runner.go:195] Run: sudo runc list -f json
	W1010 18:20:51.460616  316039 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:20:51Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:20:51.460703  316039 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:20:51.481897  316039 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1010 18:20:51.481919  316039 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1010 18:20:51.481972  316039 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 18:20:51.498830  316039 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:20:51.500143  316039 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-556024" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:51.501037  316039 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-5815/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-556024" cluster setting kubeconfig missing "no-preload-556024" context setting]
	I1010 18:20:51.502303  316039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:51.504699  316039 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 18:20:51.525552  316039 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1010 18:20:51.525663  316039 kubeadm.go:601] duration metric: took 43.737077ms to restartPrimaryControlPlane
	I1010 18:20:51.525704  316039 kubeadm.go:402] duration metric: took 151.565362ms to StartCluster
	I1010 18:20:51.525736  316039 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:51.525837  316039 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:51.528729  316039 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
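
kubeconfig.go found neither a cluster nor a context entry for no-preload-556024 and repaired the file under a write lock. A sketch of such a repair using client-go's clientcmd package; this is an assumption about mechanism (minikube's own code path differs, and the lock is omitted here), but the effect on the file is the same:

    package main

    import (
    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
    	path := "/home/jenkins/minikube-integration/21724-5815/kubeconfig"
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		panic(err)
    	}
    	const name = "no-preload-556024"
    	if _, ok := cfg.Clusters[name]; !ok {
    		cfg.Clusters[name] = &clientcmdapi.Cluster{Server: "https://192.168.76.2:8443"}
    	}
    	if _, ok := cfg.Contexts[name]; !ok {
    		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
    	}
    	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
    		panic(err)
    	}
    }
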
	I1010 18:20:51.529336  316039 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:20:51.529408  316039 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:20:51.530224  316039 addons.go:69] Setting storage-provisioner=true in profile "no-preload-556024"
	I1010 18:20:51.530244  316039 addons.go:238] Setting addon storage-provisioner=true in "no-preload-556024"
	W1010 18:20:51.530252  316039 addons.go:247] addon storage-provisioner should already be in state true
	I1010 18:20:51.530282  316039 host.go:66] Checking if "no-preload-556024" exists ...
	I1010 18:20:51.530800  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:51.531125  316039 addons.go:69] Setting dashboard=true in profile "no-preload-556024"
	I1010 18:20:51.531155  316039 addons.go:238] Setting addon dashboard=true in "no-preload-556024"
	I1010 18:20:51.531200  316039 addons.go:69] Setting default-storageclass=true in profile "no-preload-556024"
	W1010 18:20:51.531164  316039 addons.go:247] addon dashboard should already be in state true
	I1010 18:20:51.531221  316039 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-556024"
	I1010 18:20:51.531252  316039 host.go:66] Checking if "no-preload-556024" exists ...
	I1010 18:20:51.531518  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:51.531721  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:51.532678  316039 out.go:179] * Verifying Kubernetes components...
	I1010 18:20:51.529573  316039 config.go:182] Loaded profile config "no-preload-556024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:51.533687  316039 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:51.568923  316039 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:20:51.570126  316039 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:51.570179  316039 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:20:51.570260  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:51.572781  316039 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1010 18:20:51.573105  316039 addons.go:238] Setting addon default-storageclass=true in "no-preload-556024"
	W1010 18:20:51.573167  316039 addons.go:247] addon default-storageclass should already be in state true
	I1010 18:20:51.573209  316039 host.go:66] Checking if "no-preload-556024" exists ...
	I1010 18:20:51.573839  316039 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:20:51.574682  316039 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1010 18:20:49.625348  315243 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:49.625366  315243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:20:49.625419  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:49.625898  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1010 18:20:49.625914  315243 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1010 18:20:49.625963  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:49.665940  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:49.667220  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:49.670104  315243 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:49.670128  315243 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:20:49.670179  315243 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:20:49.701992  315243 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:20:49.790496  315243 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:20:49.808286  315243 node_ready.go:35] waiting up to 6m0s for node "embed-certs-472518" to be "Ready" ...
	I1010 18:20:49.877948  315243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:49.900523  315243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:49.904789  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1010 18:20:49.904813  315243 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1010 18:20:49.926463  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1010 18:20:49.926491  315243 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1010 18:20:49.948786  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1010 18:20:49.948861  315243 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1010 18:20:49.970537  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1010 18:20:49.970713  315243 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1010 18:20:49.991031  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1010 18:20:49.991096  315243 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1010 18:20:50.007758  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1010 18:20:50.007779  315243 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1010 18:20:50.024836  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1010 18:20:50.024870  315243 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1010 18:20:50.047286  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1010 18:20:50.047312  315243 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1010 18:20:50.066137  315243 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:20:50.066162  315243 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1010 18:20:50.082085  315243 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:20:51.728301  315243 node_ready.go:49] node "embed-certs-472518" is "Ready"
	I1010 18:20:51.728406  315243 node_ready.go:38] duration metric: took 1.920029979s for node "embed-certs-472518" to be "Ready" ...
	I1010 18:20:51.728515  315243 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:20:51.728588  315243 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:20:51.628908  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:52.129081  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:52.628493  310776 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:20:52.744888  310776 kubeadm.go:1113] duration metric: took 4.716782723s to wait for elevateKubeSystemPrivileges
	I1010 18:20:52.744920  310776 kubeadm.go:402] duration metric: took 15.95079426s to StartCluster
	I1010 18:20:52.744940  310776 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:52.745008  310776 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:20:52.748330  310776 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:20:52.748752  310776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:20:52.749079  310776 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:20:52.749218  310776 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-821769"
	I1010 18:20:52.749252  310776 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-821769"
	I1010 18:20:52.749700  310776 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:20:52.749995  310776 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-821769"
	I1010 18:20:52.751220  310776 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-821769"
	I1010 18:20:52.751275  310776 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:20:52.750124  310776 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:20:52.750164  310776 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:20:52.751814  310776 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:20:52.754296  310776 out.go:179] * Verifying Kubernetes components...
	I1010 18:20:52.757073  310776 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:20:52.784878  310776 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-821769"
	I1010 18:20:52.784930  310776 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:20:52.785459  310776 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:20:52.789598  310776 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:20:51.884191  315243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.006198253s)
	I1010 18:20:53.049905  315243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.14934381s)
	I1010 18:20:53.050041  315243 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.967926322s)
	I1010 18:20:53.050251  315243 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.321637338s)
	I1010 18:20:53.050277  315243 api_server.go:72] duration metric: took 3.4574213s to wait for apiserver process to appear ...
	I1010 18:20:53.050285  315243 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:20:53.050312  315243 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1010 18:20:53.052034  315243 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-472518 addons enable metrics-server
	
	I1010 18:20:53.053389  315243 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1010 18:20:51.575500  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1010 18:20:51.575526  316039 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1010 18:20:51.575614  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:51.610790  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:51.615370  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:51.619423  316039 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:51.619520  316039 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:20:51.619582  316039 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:20:51.653223  316039 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:20:51.773499  316039 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:20:51.814711  316039 node_ready.go:35] waiting up to 6m0s for node "no-preload-556024" to be "Ready" ...
	I1010 18:20:51.914904  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1010 18:20:51.914932  316039 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1010 18:20:51.923500  316039 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:51.949787  316039 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:51.968366  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1010 18:20:51.968396  316039 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1010 18:20:52.039689  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1010 18:20:52.039716  316039 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1010 18:20:52.098625  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1010 18:20:52.098653  316039 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1010 18:20:52.167741  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1010 18:20:52.167801  316039 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1010 18:20:52.219328  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1010 18:20:52.219352  316039 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1010 18:20:52.265308  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1010 18:20:52.265341  316039 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1010 18:20:52.313716  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1010 18:20:52.313766  316039 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1010 18:20:52.352592  316039 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:20:52.352644  316039 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1010 18:20:52.387452  316039 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:20:52.790760  310776 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:52.790790  310776 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:20:52.790870  310776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:20:52.821845  310776 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:52.821873  310776 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:20:52.821928  310776 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:20:52.827947  310776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:20:52.860145  310776 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:20:52.948729  310776 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
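
Decoded, the sed pipeline above fetches the coredns ConfigMap, inserts a hosts block ahead of the `forward . /etc/resolv.conf` directive (plus a `log` line ahead of `errors`), and replaces the ConfigMap in place. The fragment it injects into the Corefile is:

    hosts {
       192.168.103.1 host.minikube.internal
       fallthrough
    }

With that block in place, pods resolve host.minikube.internal to the host gateway while the fallthrough keeps every other name flowing to the upstream resolver.
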
	I1010 18:20:52.991756  310776 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:20:53.123700  310776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:20:53.139884  310776 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:20:53.325308  310776 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1010 18:20:53.330566  310776 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-821769" to be "Ready" ...
	I1010 18:20:53.592278  310776 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1010 18:20:50.034107  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:52.042686  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	I1010 18:20:54.467305  316039 node_ready.go:49] node "no-preload-556024" is "Ready"
	I1010 18:20:54.467335  316039 node_ready.go:38] duration metric: took 2.652575598s for node "no-preload-556024" to be "Ready" ...
	I1010 18:20:54.467351  316039 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:20:54.467400  316039 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:20:55.159684  316039 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.236143034s)
	I1010 18:20:55.159770  316039 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.209938968s)
	I1010 18:20:55.159920  316039 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.772426078s)
	I1010 18:20:55.159956  316039 api_server.go:72] duration metric: took 3.630212814s to wait for apiserver process to appear ...
	I1010 18:20:55.159971  316039 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:20:55.159989  316039 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1010 18:20:55.165079  316039 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:20:55.165108  316039 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
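
A 500 from /healthz immediately after a control-plane restart is expected: the two [-] hooks above (rbac/bootstrap-roles and the priority-class bootstrap) report failure until bootstrapping finishes, and minikube simply re-polls until the endpoint returns 200, as it does for the embed-certs cluster at 18:20:53.557162 below. A sketch of such a poll; the InsecureSkipVerify transport is a stand-in for minikube's CA-aware client, not how the real check trusts the apiserver:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	for {
    		resp, err := client.Get("https://192.168.76.2:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body)) // "ok"
    				return
    			}
    			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }
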
	I1010 18:20:55.171486  316039 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-556024 addons enable metrics-server
	
	I1010 18:20:55.172798  316039 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1010 18:20:53.593383  310776 addons.go:514] duration metric: took 844.300192ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1010 18:20:53.831818  310776 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-821769" context rescaled to 1 replicas
	W1010 18:20:55.334794  310776 node_ready.go:57] node "default-k8s-diff-port-821769" has "Ready":"False" status (will retry)
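
The kapi.go line above scales the coredns deployment down to one replica for this single-node cluster. A sketch of that rescale through client-go's scale subresource, assuming the kubeconfig path from the log; this illustrates the effect, not minikube's exact code:

    package main

    import (
    	"context"

    	autoscalingv1 "k8s.io/api/autoscaling/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("",
    		"/home/jenkins/minikube-integration/21724-5815/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	scale := &autoscalingv1.Scale{
    		ObjectMeta: metav1.ObjectMeta{Name: "coredns", Namespace: "kube-system"},
    		Spec:       autoscalingv1.ScaleSpec{Replicas: 1},
    	}
    	if _, err := cs.AppsV1().Deployments("kube-system").
    		UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    }
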
	I1010 18:20:53.054435  315243 addons.go:514] duration metric: took 3.461533728s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1010 18:20:53.058403  315243 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:20:53.058478  315243 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 18:20:53.551135  315243 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1010 18:20:53.557162  315243 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1010 18:20:53.558501  315243 api_server.go:141] control plane version: v1.34.1
	I1010 18:20:53.558524  315243 api_server.go:131] duration metric: took 508.226677ms to wait for apiserver health ...
	I1010 18:20:53.558535  315243 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:20:53.563761  315243 system_pods.go:59] 8 kube-system pods found
	I1010 18:20:53.563802  315243 system_pods.go:61] "coredns-66bc5c9577-hrcxc" [98494133-86f7-4d52-9de0-1b648c4e1eac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:53.563840  315243 system_pods.go:61] "etcd-embed-certs-472518" [ef258b42-940e-4df8-bda7-2abda18693ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:20:53.563851  315243 system_pods.go:61] "kindnet-kpr69" [a2bc6e25-f261-43aa-b10b-35757900e93b] Running
	I1010 18:20:53.563861  315243 system_pods.go:61] "kube-apiserver-embed-certs-472518" [d3c6aec3-5dbe-4bda-a057-5ac1cacd6dc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:20:53.563870  315243 system_pods.go:61] "kube-controller-manager-embed-certs-472518" [35d677fb-3f5f-4b3e-8175-60234a80c67e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:20:53.563877  315243 system_pods.go:61] "kube-proxy-bq985" [e2d6bf76-4b03-4118-b61b-605d27646095] Running
	I1010 18:20:53.563888  315243 system_pods.go:61] "kube-scheduler-embed-certs-472518" [7ebab2fe-6192-45eb-80a1-a169ea655e6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:20:53.563912  315243 system_pods.go:61] "storage-provisioner" [3237266d-6c19-4af5-aef2-8d99c561d535] Running
	I1010 18:20:53.563925  315243 system_pods.go:74] duration metric: took 5.382708ms to wait for pod list to return data ...
	I1010 18:20:53.563947  315243 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:20:53.566753  315243 default_sa.go:45] found service account: "default"
	I1010 18:20:53.566775  315243 default_sa.go:55] duration metric: took 2.816607ms for default service account to be created ...
	I1010 18:20:53.566784  315243 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:20:53.569996  315243 system_pods.go:86] 8 kube-system pods found
	I1010 18:20:53.570035  315243 system_pods.go:89] "coredns-66bc5c9577-hrcxc" [98494133-86f7-4d52-9de0-1b648c4e1eac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:53.570047  315243 system_pods.go:89] "etcd-embed-certs-472518" [ef258b42-940e-4df8-bda7-2abda18693ec] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:20:53.570092  315243 system_pods.go:89] "kindnet-kpr69" [a2bc6e25-f261-43aa-b10b-35757900e93b] Running
	I1010 18:20:53.570102  315243 system_pods.go:89] "kube-apiserver-embed-certs-472518" [d3c6aec3-5dbe-4bda-a057-5ac1cacd6dc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:20:53.570118  315243 system_pods.go:89] "kube-controller-manager-embed-certs-472518" [35d677fb-3f5f-4b3e-8175-60234a80c67e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:20:53.570132  315243 system_pods.go:89] "kube-proxy-bq985" [e2d6bf76-4b03-4118-b61b-605d27646095] Running
	I1010 18:20:53.570140  315243 system_pods.go:89] "kube-scheduler-embed-certs-472518" [7ebab2fe-6192-45eb-80a1-a169ea655e6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:20:53.570145  315243 system_pods.go:89] "storage-provisioner" [3237266d-6c19-4af5-aef2-8d99c561d535] Running
	I1010 18:20:53.570154  315243 system_pods.go:126] duration metric: took 3.363508ms to wait for k8s-apps to be running ...
	I1010 18:20:53.570169  315243 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:20:53.570223  315243 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:20:53.589472  315243 system_svc.go:56] duration metric: took 19.294939ms WaitForService to wait for kubelet
	I1010 18:20:53.589498  315243 kubeadm.go:586] duration metric: took 3.99664162s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:20:53.589514  315243 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:20:53.593679  315243 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:20:53.593708  315243 node_conditions.go:123] node cpu capacity is 8
	I1010 18:20:53.593724  315243 node_conditions.go:105] duration metric: took 4.204587ms to run NodePressure ...
	I1010 18:20:53.593743  315243 start.go:241] waiting for startup goroutines ...
	I1010 18:20:53.593753  315243 start.go:246] waiting for cluster config update ...
	I1010 18:20:53.593767  315243 start.go:255] writing updated cluster config ...
	I1010 18:20:53.594097  315243 ssh_runner.go:195] Run: rm -f paused
	I1010 18:20:53.599326  315243 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:20:53.605128  315243 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hrcxc" in "kube-system" namespace to be "Ready" or be gone ...
	W1010 18:20:55.615427  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	I1010 18:20:55.173737  316039 addons.go:514] duration metric: took 3.644339704s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1010 18:20:55.660767  316039 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1010 18:20:55.667019  316039 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:20:55.667122  316039 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 18:20:56.160831  316039 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1010 18:20:56.166112  316039 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1010 18:20:56.167511  316039 api_server.go:141] control plane version: v1.34.1
	I1010 18:20:56.167538  316039 api_server.go:131] duration metric: took 1.007560189s to wait for apiserver health ...
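The healthz wait above tolerates 500 responses (the [-]poststarthook/rbac/bootstrap-roles failure) until the apiserver finally returns 200. A minimal Go sketch of that polling pattern follows; it is illustrative only (minikube's real logic lives in api_server.go and differs in detail), and the URL, timeout, and the decision to skip TLS verification are assumptions made for the sketch:

	// healthz_poll.go - illustrative sketch, not minikube's implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout
	// elapses. A 500 (e.g. while poststarthooks such as rbac/bootstrap-roles
	// finish) is treated as "not ready yet", mirroring the retries above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver serves a self-signed cert in this setup, so the
			// sketch skips verification; real code should pin the cluster CA.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the "returned 200: ok" case above
				}
			}
			time.Sleep(500 * time.Millisecond) // ~0.5s poll cadence, as in the log
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		// Address taken from the log above; adjust for your own cluster.
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}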
	I1010 18:20:56.167549  316039 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:20:56.171980  316039 system_pods.go:59] 8 kube-system pods found
	I1010 18:20:56.172028  316039 system_pods.go:61] "coredns-66bc5c9577-wpsrd" [316be091-2de7-417c-b44b-1d26108e3ed3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:56.172041  316039 system_pods.go:61] "etcd-no-preload-556024" [0f8f77e3-e838-4f27-9f17-2cd264198574] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:20:56.172063  316039 system_pods.go:61] "kindnet-wsk6h" [71384861-5289-4d2b-8d62-b7d2c27d86b8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1010 18:20:56.172073  316039 system_pods.go:61] "kube-apiserver-no-preload-556024" [7efe66ae-83bf-4ea5-a271-d8e944f74053] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:20:56.172083  316039 system_pods.go:61] "kube-controller-manager-no-preload-556024" [9e7fbd67-ce38-425d-b80d-b8ff3748fa70] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:20:56.172091  316039 system_pods.go:61] "kube-proxy-frchp" [3457ebf4-7608-4c78-b8dc-3a92a2fb32ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1010 18:20:56.172099  316039 system_pods.go:61] "kube-scheduler-no-preload-556024" [c6fb51f0-cf8d-4a56-aba5-95aff4190b44] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:20:56.172107  316039 system_pods.go:61] "storage-provisioner" [42a21c5e-4318-43f7-8d2a-dc62676b17c2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:20:56.172115  316039 system_pods.go:74] duration metric: took 4.558605ms to wait for pod list to return data ...
	I1010 18:20:56.172125  316039 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:20:56.174926  316039 default_sa.go:45] found service account: "default"
	I1010 18:20:56.174945  316039 default_sa.go:55] duration metric: took 2.814097ms for default service account to be created ...
	I1010 18:20:56.174954  316039 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:20:56.177615  316039 system_pods.go:86] 8 kube-system pods found
	I1010 18:20:56.177644  316039 system_pods.go:89] "coredns-66bc5c9577-wpsrd" [316be091-2de7-417c-b44b-1d26108e3ed3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:20:56.177653  316039 system_pods.go:89] "etcd-no-preload-556024" [0f8f77e3-e838-4f27-9f17-2cd264198574] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:20:56.177664  316039 system_pods.go:89] "kindnet-wsk6h" [71384861-5289-4d2b-8d62-b7d2c27d86b8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1010 18:20:56.177673  316039 system_pods.go:89] "kube-apiserver-no-preload-556024" [7efe66ae-83bf-4ea5-a271-d8e944f74053] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:20:56.177683  316039 system_pods.go:89] "kube-controller-manager-no-preload-556024" [9e7fbd67-ce38-425d-b80d-b8ff3748fa70] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:20:56.177697  316039 system_pods.go:89] "kube-proxy-frchp" [3457ebf4-7608-4c78-b8dc-3a92a2fb32ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1010 18:20:56.177706  316039 system_pods.go:89] "kube-scheduler-no-preload-556024" [c6fb51f0-cf8d-4a56-aba5-95aff4190b44] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:20:56.177717  316039 system_pods.go:89] "storage-provisioner" [42a21c5e-4318-43f7-8d2a-dc62676b17c2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:20:56.177725  316039 system_pods.go:126] duration metric: took 2.765119ms to wait for k8s-apps to be running ...
	I1010 18:20:56.177734  316039 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:20:56.177779  316039 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:20:56.195926  316039 system_svc.go:56] duration metric: took 18.185245ms WaitForService to wait for kubelet
	I1010 18:20:56.195953  316039 kubeadm.go:586] duration metric: took 4.666211157s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:20:56.195977  316039 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:20:56.199540  316039 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:20:56.199578  316039 node_conditions.go:123] node cpu capacity is 8
	I1010 18:20:56.199596  316039 node_conditions.go:105] duration metric: took 3.612981ms to run NodePressure ...
	I1010 18:20:56.199610  316039 start.go:241] waiting for startup goroutines ...
	I1010 18:20:56.199621  316039 start.go:246] waiting for cluster config update ...
	I1010 18:20:56.199635  316039 start.go:255] writing updated cluster config ...
	I1010 18:20:56.199914  316039 ssh_runner.go:195] Run: rm -f paused
	I1010 18:20:56.205011  316039 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:20:56.210819  316039 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wpsrd" in "kube-system" namespace to be "Ready" or be gone ...
	W1010 18:20:58.216565  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:20:54.534826  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:56.537833  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:20:57.839422  310776 node_ready.go:57] node "default-k8s-diff-port-821769" has "Ready":"False" status (will retry)
	W1010 18:21:00.334364  310776 node_ready.go:57] node "default-k8s-diff-port-821769" has "Ready":"False" status (will retry)
	W1010 18:20:58.112627  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:00.611479  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:00.219211  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:02.732835  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:20:59.033775  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:21:01.532897  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:21:02.334772  310776 node_ready.go:57] node "default-k8s-diff-port-821769" has "Ready":"False" status (will retry)
	I1010 18:21:04.334550  310776 node_ready.go:49] node "default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:04.334584  310776 node_ready.go:38] duration metric: took 11.003942186s for node "default-k8s-diff-port-821769" to be "Ready" ...
	I1010 18:21:04.334602  310776 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:21:04.334661  310776 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:21:04.352414  310776 api_server.go:72] duration metric: took 11.600692282s to wait for apiserver process to appear ...
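The apiserver-process wait above runs pgrep on the node through minikube's ssh_runner. A rough local stand-in, assuming os/exec against the current host rather than SSH, with the pattern copied from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess polls `pgrep -xnf pattern` until it exits 0 (a match
	// exists) or the deadline passes; pgrep exits non-zero on no match.
	func waitForProcess(pattern string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
				return nil
			}
			time.Sleep(250 * time.Millisecond)
		}
		return fmt.Errorf("no process matching %q after %s", pattern, timeout)
	}

	func main() {
		if err := waitForProcess("kube-apiserver.*minikube.*", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}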
	I1010 18:21:04.352440  310776 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:21:04.352461  310776 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1010 18:21:04.357202  310776 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1010 18:21:04.358448  310776 api_server.go:141] control plane version: v1.34.1
	I1010 18:21:04.358475  310776 api_server.go:131] duration metric: took 6.027777ms to wait for apiserver health ...
	I1010 18:21:04.358486  310776 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:21:04.362525  310776 system_pods.go:59] 8 kube-system pods found
	I1010 18:21:04.362567  310776 system_pods.go:61] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:04.362576  310776 system_pods.go:61] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running
	I1010 18:21:04.362584  310776 system_pods.go:61] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:04.362590  310776 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running
	I1010 18:21:04.362597  310776 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running
	I1010 18:21:04.362604  310776 system_pods.go:61] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:04.362609  310776 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running
	I1010 18:21:04.362621  310776 system_pods.go:61] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:21:04.362634  310776 system_pods.go:74] duration metric: took 4.14166ms to wait for pod list to return data ...
	I1010 18:21:04.362650  310776 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:21:04.365765  310776 default_sa.go:45] found service account: "default"
	I1010 18:21:04.365790  310776 default_sa.go:55] duration metric: took 3.13114ms for default service account to be created ...
	I1010 18:21:04.365801  310776 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:21:04.368917  310776 system_pods.go:86] 8 kube-system pods found
	I1010 18:21:04.368948  310776 system_pods.go:89] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:04.368953  310776 system_pods.go:89] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running
	I1010 18:21:04.368962  310776 system_pods.go:89] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:04.368966  310776 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running
	I1010 18:21:04.368970  310776 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running
	I1010 18:21:04.368973  310776 system_pods.go:89] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:04.368977  310776 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running
	I1010 18:21:04.368982  310776 system_pods.go:89] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:21:04.369018  310776 retry.go:31] will retry after 236.267744ms: missing components: kube-dns
	I1010 18:21:04.617498  310776 system_pods.go:86] 8 kube-system pods found
	I1010 18:21:04.617554  310776 system_pods.go:89] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:04.617563  310776 system_pods.go:89] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running
	I1010 18:21:04.617572  310776 system_pods.go:89] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:04.617577  310776 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running
	I1010 18:21:04.617583  310776 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running
	I1010 18:21:04.617588  310776 system_pods.go:89] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:04.617593  310776 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running
	I1010 18:21:04.617600  310776 system_pods.go:89] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:21:04.617679  310776 retry.go:31] will retry after 358.019281ms: missing components: kube-dns
	I1010 18:21:04.980610  310776 system_pods.go:86] 8 kube-system pods found
	I1010 18:21:04.980648  310776 system_pods.go:89] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:04.980657  310776 system_pods.go:89] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running
	I1010 18:21:04.980665  310776 system_pods.go:89] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:04.980671  310776 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running
	I1010 18:21:04.980677  310776 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running
	I1010 18:21:04.980682  310776 system_pods.go:89] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:04.980691  310776 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running
	I1010 18:21:04.980698  310776 system_pods.go:89] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1010 18:21:04.980718  310776 retry.go:31] will retry after 460.448201ms: missing components: kube-dns
	I1010 18:21:05.476108  310776 system_pods.go:86] 8 kube-system pods found
	I1010 18:21:05.476135  310776 system_pods.go:89] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Running
	I1010 18:21:05.476141  310776 system_pods.go:89] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running
	I1010 18:21:05.476147  310776 system_pods.go:89] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:05.476153  310776 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running
	I1010 18:21:05.476158  310776 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running
	I1010 18:21:05.476164  310776 system_pods.go:89] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:05.476169  310776 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running
	I1010 18:21:05.476175  310776 system_pods.go:89] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Running
	I1010 18:21:05.476185  310776 system_pods.go:126] duration metric: took 1.110376994s to wait for k8s-apps to be running ...
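The "will retry after ..." lines above come from a list-and-retry loop that grows a jittered delay each round (compare the 236ms / 358ms / 460ms retries) until no required component is missing. A condensed sketch of that pattern; listRunning is a hypothetical stub standing in for the real kube-system pod listing:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// listRunning would list kube-system pods and report which required
	// components are currently Running; stubbed here for illustration.
	func listRunning() map[string]bool {
		return map[string]bool{"kube-dns": false} // pretend CoreDNS is still Pending
	}

	// waitForComponents retries with a jittered, growing delay until
	// nothing is missing or the overall timeout expires.
	func waitForComponents(required []string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			running := listRunning()
			missing := []string{}
			for _, c := range required {
				if !running[c] {
					missing = append(missing, c)
				}
			}
			if len(missing) == 0 {
				return nil
			}
			jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: missing components: %v\n", jittered, missing)
			time.Sleep(jittered)
			delay = delay * 3 / 2 // grow the base delay each round
		}
		return fmt.Errorf("components still missing after %s", timeout)
	}

	func main() {
		_ = waitForComponents([]string{"kube-dns"}, 5*time.Second)
	}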
	I1010 18:21:05.476203  310776 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:21:05.476263  310776 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:05.491314  310776 system_svc.go:56] duration metric: took 15.10412ms WaitForService to wait for kubelet
	I1010 18:21:05.491339  310776 kubeadm.go:586] duration metric: took 12.739624944s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:21:05.491357  310776 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:21:05.494549  310776 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:21:05.494574  310776 node_conditions.go:123] node cpu capacity is 8
	I1010 18:21:05.494597  310776 node_conditions.go:105] duration metric: took 3.235725ms to run NodePressure ...
	I1010 18:21:05.494610  310776 start.go:241] waiting for startup goroutines ...
	I1010 18:21:05.494620  310776 start.go:246] waiting for cluster config update ...
	I1010 18:21:05.494635  310776 start.go:255] writing updated cluster config ...
	I1010 18:21:05.505739  310776 ssh_runner.go:195] Run: rm -f paused
	I1010 18:21:05.510435  310776 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:21:05.514397  310776 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wrz5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.519411  310776 pod_ready.go:94] pod "coredns-66bc5c9577-wrz5v" is "Ready"
	I1010 18:21:05.519440  310776 pod_ready.go:86] duration metric: took 5.021224ms for pod "coredns-66bc5c9577-wrz5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.521798  310776 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.526425  310776 pod_ready.go:94] pod "etcd-default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:05.526453  310776 pod_ready.go:86] duration metric: took 4.627916ms for pod "etcd-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.528777  310776 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.533585  310776 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:05.533610  310776 pod_ready.go:86] duration metric: took 4.808877ms for pod "kube-apiserver-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.535771  310776 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:05.915199  310776 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:05.915227  310776 pod_ready.go:86] duration metric: took 379.433579ms for pod "kube-controller-manager-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:06.115325  310776 pod_ready.go:83] waiting for pod "kube-proxy-h2mzf" in "kube-system" namespace to be "Ready" or be gone ...
	W1010 18:21:02.613407  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:04.613477  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	I1010 18:21:06.515281  310776 pod_ready.go:94] pod "kube-proxy-h2mzf" is "Ready"
	I1010 18:21:06.515310  310776 pod_ready.go:86] duration metric: took 399.959779ms for pod "kube-proxy-h2mzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:06.716017  310776 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:07.115133  310776 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:07.115162  310776 pod_ready.go:86] duration metric: took 399.114099ms for pod "kube-scheduler-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:07.115176  310776 pod_ready.go:40] duration metric: took 1.604699188s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:21:07.163929  310776 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:21:07.192097  310776 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-821769" cluster and "default" namespace by default
	W1010 18:21:05.217220  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:07.716808  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:04.032734  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:21:06.531020  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	W1010 18:21:08.531357  309154 pod_ready.go:104] pod "coredns-5dd5756b68-qfwck" is not "Ready", error: <nil>
	I1010 18:21:09.532675  309154 pod_ready.go:94] pod "coredns-5dd5756b68-qfwck" is "Ready"
	I1010 18:21:09.532706  309154 pod_ready.go:86] duration metric: took 32.006855812s for pod "coredns-5dd5756b68-qfwck" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.535886  309154 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.540776  309154 pod_ready.go:94] pod "etcd-old-k8s-version-141193" is "Ready"
	I1010 18:21:09.540797  309154 pod_ready.go:86] duration metric: took 4.887324ms for pod "etcd-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.543453  309154 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.547188  309154 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-141193" is "Ready"
	I1010 18:21:09.547212  309154 pod_ready.go:86] duration metric: took 3.738135ms for pod "kube-apiserver-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.549745  309154 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.730359  309154 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-141193" is "Ready"
	I1010 18:21:09.730391  309154 pod_ready.go:86] duration metric: took 180.622284ms for pod "kube-controller-manager-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:09.930224  309154 pod_ready.go:83] waiting for pod "kube-proxy-n9klp" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:10.329749  309154 pod_ready.go:94] pod "kube-proxy-n9klp" is "Ready"
	I1010 18:21:10.329777  309154 pod_ready.go:86] duration metric: took 399.527981ms for pod "kube-proxy-n9klp" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:10.533434  309154 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:10.930255  309154 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-141193" is "Ready"
	I1010 18:21:10.930280  309154 pod_ready.go:86] duration metric: took 396.81759ms for pod "kube-scheduler-old-k8s-version-141193" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:21:10.930291  309154 pod_ready.go:40] duration metric: took 33.409574947s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
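The pod_ready.go waits that run throughout this log poll each pod's Ready condition. A self-contained client-go sketch of the same check; the kubeconfig path and the choice of the k8s-app=kube-dns selector (one of the labels listed above) are assumptions for the example:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the Pod's Ready condition is True - the
	// condition the pod_ready.go waits above keep re-checking.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Path is an assumption; point this at your own kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		for i := range pods.Items {
			fmt.Printf("%s ready=%v\n", pods.Items[i].Name, isPodReady(&pods.Items[i]))
		}
	}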
	I1010 18:21:10.976268  309154 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1010 18:21:10.978153  309154 out.go:203] 
	W1010 18:21:10.979362  309154 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1010 18:21:10.980507  309154 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1010 18:21:10.981654  309154 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-141193" cluster and "default" namespace by default
	W1010 18:21:07.110875  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:09.610687  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:11.612648  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:09.717125  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:12.215991  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:14.111016  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:16.112160  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:14.715907  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:16.717135  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:18.610479  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:21.111582  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:19.216867  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:21.716430  316039 pod_ready.go:104] pod "coredns-66bc5c9577-wpsrd" is not "Ready", error: <nil>
	W1010 18:21:23.610211  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	W1010 18:21:25.611266  315243 pod_ready.go:104] pod "coredns-66bc5c9577-hrcxc" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 10 18:20:58 old-k8s-version-141193 crio[563]: time="2025-10-10T18:20:58.082875577Z" level=info msg="Started container" PID=1724 containerID=317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs/dashboard-metrics-scraper id=e71daf43-326b-44cb-bd8a-d3eb2c862b08 name=/runtime.v1.RuntimeService/StartContainer sandboxID=09605202217f839452fa39403de405feef1641f993649b6956e636d1bd9f8906
	Oct 10 18:20:59 old-k8s-version-141193 crio[563]: time="2025-10-10T18:20:59.036209978Z" level=info msg="Removing container: 6ebe203dda94ad2ffbefc3adcdc8edca63de95384ffb197e95d7d948c64a7df8" id=8624b80f-3691-46d6-9ee1-96808defb8e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:20:59 old-k8s-version-141193 crio[563]: time="2025-10-10T18:20:59.047819255Z" level=info msg="Removed container 6ebe203dda94ad2ffbefc3adcdc8edca63de95384ffb197e95d7d948c64a7df8: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs/dashboard-metrics-scraper" id=8624b80f-3691-46d6-9ee1-96808defb8e5 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.058459186Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=69827828-75f9-42b6-945c-1c687e831f11 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.059379803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=407fc15b-6efb-40db-8f13-dabf56e6993d name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.06037917Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cc36a961-c96e-44ff-abc5-33a69ca73fd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.060651335Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.064669479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.064856318Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/35f4e042d921b551dca48577da31839d46eecb60eb815956dca693433218a3d0/merged/etc/passwd: no such file or directory"
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.064888869Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/35f4e042d921b551dca48577da31839d46eecb60eb815956dca693433218a3d0/merged/etc/group: no such file or directory"
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.065182175Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.097652948Z" level=info msg="Created container de2790671165db40044a25122f17100a12947ed8065bf6e2ed1ff37219e247dd: kube-system/storage-provisioner/storage-provisioner" id=cc36a961-c96e-44ff-abc5-33a69ca73fd0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.09836933Z" level=info msg="Starting container: de2790671165db40044a25122f17100a12947ed8065bf6e2ed1ff37219e247dd" id=2d6f1078-d156-45b2-bc0c-ea5118de7e93 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:21:08 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:08.100139132Z" level=info msg="Started container" PID=1740 containerID=de2790671165db40044a25122f17100a12947ed8065bf6e2ed1ff37219e247dd description=kube-system/storage-provisioner/storage-provisioner id=2d6f1078-d156-45b2-bc0c-ea5118de7e93 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cfab681e276ea8331c4efdc53f86e44f3bf06cf39a7ee8394181e981af34fd2e
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.93949691Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=fe8f9be7-f063-4fe6-b66c-2f19de11f845 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.940370725Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=100faed2-1e2b-4ce9-9362-d348e020fde3 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.941338354Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs/dashboard-metrics-scraper" id=ce0380bd-c295-4fd1-9595-bc4cca79cfdc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.941573634Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.948452847Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.949146439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.972903908Z" level=info msg="Created container 1667847f042344bbbf08942c8b74a2e8385d5dfaf27c738cc310c23092d32a3d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs/dashboard-metrics-scraper" id=ce0380bd-c295-4fd1-9595-bc4cca79cfdc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.973521234Z" level=info msg="Starting container: 1667847f042344bbbf08942c8b74a2e8385d5dfaf27c738cc310c23092d32a3d" id=50a55c86-24fc-47ed-a2d2-ef28904341fe name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:21:14 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:14.975184688Z" level=info msg="Started container" PID=1777 containerID=1667847f042344bbbf08942c8b74a2e8385d5dfaf27c738cc310c23092d32a3d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs/dashboard-metrics-scraper id=50a55c86-24fc-47ed-a2d2-ef28904341fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=09605202217f839452fa39403de405feef1641f993649b6956e636d1bd9f8906
	Oct 10 18:21:15 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:15.076925376Z" level=info msg="Removing container: 317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734" id=3293889e-fd85-45bc-8ce0-1da6f99284ba name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:21:15 old-k8s-version-141193 crio[563]: time="2025-10-10T18:21:15.087039035Z" level=info msg="Removed container 317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs/dashboard-metrics-scraper" id=3293889e-fd85-45bc-8ce0-1da6f99284ba name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	1667847f04234       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   2                   09605202217f8       dashboard-metrics-scraper-5f989dc9cf-nsnjs       kubernetes-dashboard
	de2790671165d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   cfab681e276ea       storage-provisioner                              kube-system
	7b7c62874a1a3       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   32 seconds ago      Running             kubernetes-dashboard        0                   49a17004ec811       kubernetes-dashboard-8694d4445c-g8lm9            kubernetes-dashboard
	2d06b9d980aa2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   5e796db1aa438       busybox                                          default
	76851e857de85       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           50 seconds ago      Running             coredns                     0                   a1120ac22e98d       coredns-5dd5756b68-qfwck                         kube-system
	17dc2d6edfc14       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   d86d65afaac81       kindnet-wjlh2                                    kube-system
	f0a141878e079       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           50 seconds ago      Running             kube-proxy                  0                   168360b4de985       kube-proxy-n9klp                                 kube-system
	194d18ca204ba       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   cfab681e276ea       storage-provisioner                              kube-system
	35c22fae38401       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           54 seconds ago      Running             kube-controller-manager     0                   97ac1788edac4       kube-controller-manager-old-k8s-version-141193   kube-system
	40a7654c69d62       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           54 seconds ago      Running             kube-scheduler              0                   c20a7e3eb2399       kube-scheduler-old-k8s-version-141193            kube-system
	fd2510c67a243       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           54 seconds ago      Running             kube-apiserver              0                   d30dd367d921e       kube-apiserver-old-k8s-version-141193            kube-system
	3757d2bd72722       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           54 seconds ago      Running             etcd                        0                   545d8a01a07c9       etcd-old-k8s-version-141193                      kube-system
	
	
	==> coredns [76851e857de85c1d61246f777900d9a4581fca45808f5b980f367404d0d69f55] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39843 - 31746 "HINFO IN 6967103515947113627.6149114998770294594. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.027021535s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-141193
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-141193
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=old-k8s-version-141193
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_19_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:19:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-141193
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:21:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:21:07 +0000   Fri, 10 Oct 2025 18:19:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:21:07 +0000   Fri, 10 Oct 2025 18:19:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:21:07 +0000   Fri, 10 Oct 2025 18:19:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 18:21:07 +0000   Fri, 10 Oct 2025 18:19:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-141193
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                8f8bdf4a-f8cb-42ff-aa21-c2ad268c8723
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-qfwck                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-old-k8s-version-141193                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m1s
	  kube-system                 kindnet-wjlh2                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-141193             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-141193    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-n9klp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-141193             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-nsnjs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-g8lm9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m1s               kubelet          Node old-k8s-version-141193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s               kubelet          Node old-k8s-version-141193 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s               kubelet          Node old-k8s-version-141193 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node old-k8s-version-141193 event: Registered Node old-k8s-version-141193 in Controller
	  Normal  NodeReady                95s                kubelet          Node old-k8s-version-141193 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)  kubelet          Node old-k8s-version-141193 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)  kubelet          Node old-k8s-version-141193 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 55s)  kubelet          Node old-k8s-version-141193 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                node-controller  Node old-k8s-version-141193 event: Registered Node old-k8s-version-141193 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 95 0c 3e 92 2e 08 06
	[  +0.052845] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[ +11.354316] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[  +7.101927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 9b 73 27 8c 80 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[  +6.287191] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 27 2d 28 d6 46 08 06
	[  +0.000293] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[Oct10 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 8c 22 f6 6b cf 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 29 bf 13 20 f9 08 06
	[ +15.511156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e d6 74 aa 27 d0 08 06
	[  +0.008495] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	[Oct10 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 0b 54 33 52 4e 08 06
	[  +0.000597] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	
	
	==> etcd [3757d2bd727229dd68d4be360086d9271d28f5c098b84264b16d8e9b1794093f] <==
	{"level":"info","ts":"2025-10-10T18:20:33.499505Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-10T18:20:33.499525Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-10T18:20:33.49971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-10T18:20:33.499812Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-10T18:20:33.499937Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-10T18:20:33.500136Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-10T18:20:33.502301Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-10T18:20:33.502594Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-10T18:20:33.502657Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-10T18:20:33.50274Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-10T18:20:33.502773Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-10T18:20:34.990013Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-10T18:20:34.990071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-10T18:20:34.990106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-10T18:20:34.990119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-10T18:20:34.990127Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-10T18:20:34.990136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-10T18:20:34.990143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-10T18:20:34.991452Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-141193 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-10T18:20:34.991448Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-10T18:20:34.991462Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-10T18:20:34.991659Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-10T18:20:34.991685Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-10T18:20:34.992706Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-10T18:20:34.992751Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:21:27 up  1:03,  0 user,  load average: 5.18, 4.56, 2.90
	Linux old-k8s-version-141193 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [17dc2d6edfc14bbc3aad59599c1fe778e3325320e2e82a8580a705cf10bd89fe] <==
	I1010 18:20:37.441263       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:20:37.534873       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1010 18:20:37.535081       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:20:37.535100       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:20:37.535131       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:20:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:20:37.737516       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:20:37.737930       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:20:37.737988       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:20:37.738411       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 18:20:38.138087       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:20:38.138115       1 metrics.go:72] Registering metrics
	I1010 18:20:38.138166       1 controller.go:711] "Syncing nftables rules"
	I1010 18:20:47.738272       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1010 18:20:47.738333       1 main.go:301] handling current node
	I1010 18:20:57.737420       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1010 18:20:57.737471       1 main.go:301] handling current node
	I1010 18:21:07.737815       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1010 18:21:07.737844       1 main.go:301] handling current node
	I1010 18:21:17.739148       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1010 18:21:17.739197       1 main.go:301] handling current node
	I1010 18:21:27.744167       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1010 18:21:27.744196       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fd2510c67a2437bd698c9b5bc34c054b544522802f65bf2ffc6d09e1b707e52f] <==
	I1010 18:20:35.959903       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1010 18:20:36.017786       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:20:36.023801       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1010 18:20:36.060318       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1010 18:20:36.060452       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1010 18:20:36.060340       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1010 18:20:36.061092       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1010 18:20:36.060373       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1010 18:20:36.061230       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1010 18:20:36.061353       1 aggregator.go:166] initial CRD sync complete...
	I1010 18:20:36.061416       1 autoregister_controller.go:141] Starting autoregister controller
	I1010 18:20:36.061443       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1010 18:20:36.061470       1 cache.go:39] Caches are synced for autoregister controller
	I1010 18:20:36.063549       1 shared_informer.go:318] Caches are synced for configmaps
	I1010 18:20:36.849179       1 controller.go:624] quota admission added evaluator for: namespaces
	I1010 18:20:36.880411       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1010 18:20:36.896732       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:20:36.904909       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:20:36.911814       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1010 18:20:36.953683       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.227.239"}
	I1010 18:20:36.962337       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:20:36.971813       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.21.215"}
	I1010 18:20:48.638682       1 controller.go:624] quota admission added evaluator for: endpoints
	I1010 18:20:48.788319       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1010 18:20:48.837357       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [35c22fae38401c52658935667354e9d6d1ec78136964aab98a72bf3ef5eb768f] <==
	I1010 18:20:48.793603       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1010 18:20:48.795098       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1010 18:20:48.946034       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-nsnjs"
	I1010 18:20:48.948468       1 shared_informer.go:318] Caches are synced for garbage collector
	I1010 18:20:48.948563       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1010 18:20:48.948696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="502.101993ms"
	I1010 18:20:48.948737       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-g8lm9"
	I1010 18:20:48.949655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="255.576µs"
	I1010 18:20:48.956488       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="163.048968ms"
	I1010 18:20:48.957815       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="162.866438ms"
	I1010 18:20:48.958429       1 shared_informer.go:318] Caches are synced for garbage collector
	I1010 18:20:48.965393       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.846233ms"
	I1010 18:20:48.965487       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.956µs"
	I1010 18:20:48.968580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="10.706915ms"
	I1010 18:20:48.968664       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.893µs"
	I1010 18:20:48.977037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="46.789µs"
	I1010 18:20:56.043119       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.974116ms"
	I1010 18:20:56.043257       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="87.621µs"
	I1010 18:20:58.042069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.508µs"
	I1010 18:20:59.055831       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="84.411µs"
	I1010 18:21:00.051084       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.962µs"
	I1010 18:21:09.531121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.613968ms"
	I1010 18:21:09.531252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.783µs"
	I1010 18:21:15.088416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.475µs"
	I1010 18:21:19.268911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="91.347µs"
	
	
	==> kube-proxy [f0a141878e079b9bef80d8c836ead2aaa0e5e6f6923e15d06ab08325251c3ff9] <==
	I1010 18:20:37.339698       1 server_others.go:69] "Using iptables proxy"
	I1010 18:20:37.349336       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1010 18:20:37.367706       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:20:37.369965       1 server_others.go:152] "Using iptables Proxier"
	I1010 18:20:37.370003       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1010 18:20:37.370013       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1010 18:20:37.370123       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1010 18:20:37.370390       1 server.go:846] "Version info" version="v1.28.0"
	I1010 18:20:37.370404       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:20:37.371112       1 config.go:188] "Starting service config controller"
	I1010 18:20:37.371480       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1010 18:20:37.371522       1 config.go:97] "Starting endpoint slice config controller"
	I1010 18:20:37.371530       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1010 18:20:37.371859       1 config.go:315] "Starting node config controller"
	I1010 18:20:37.371872       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1010 18:20:37.472545       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1010 18:20:37.472576       1 shared_informer.go:318] Caches are synced for node config
	I1010 18:20:37.472558       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [40a7654c69d62a8d95b0f35cd0690ed73e1fdcfe1ca6c15bbfe41a93f8101259] <==
	I1010 18:20:34.136381       1 serving.go:348] Generated self-signed cert in-memory
	W1010 18:20:35.972789       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1010 18:20:35.972919       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1010 18:20:35.972941       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1010 18:20:35.972970       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1010 18:20:36.009453       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1010 18:20:36.009519       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:20:36.014065       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:20:36.014107       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1010 18:20:36.017092       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1010 18:20:36.017198       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1010 18:20:36.115268       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 10 18:20:48 old-k8s-version-141193 kubelet[724]: I1010 18:20:48.955450     724 topology_manager.go:215] "Topology Admit Handler" podUID="b471ecc7-c8aa-40fd-bbe2-b16f4f36530f" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-g8lm9"
	Oct 10 18:20:49 old-k8s-version-141193 kubelet[724]: I1010 18:20:49.074848     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzpnq\" (UniqueName: \"kubernetes.io/projected/b471ecc7-c8aa-40fd-bbe2-b16f4f36530f-kube-api-access-vzpnq\") pod \"kubernetes-dashboard-8694d4445c-g8lm9\" (UID: \"b471ecc7-c8aa-40fd-bbe2-b16f4f36530f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-g8lm9"
	Oct 10 18:20:49 old-k8s-version-141193 kubelet[724]: I1010 18:20:49.074907     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdsds\" (UniqueName: \"kubernetes.io/projected/91f3f2ac-4ba3-40e9-8173-386bdbdd8dae-kube-api-access-jdsds\") pod \"dashboard-metrics-scraper-5f989dc9cf-nsnjs\" (UID: \"91f3f2ac-4ba3-40e9-8173-386bdbdd8dae\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs"
	Oct 10 18:20:49 old-k8s-version-141193 kubelet[724]: I1010 18:20:49.074947     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b471ecc7-c8aa-40fd-bbe2-b16f4f36530f-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-g8lm9\" (UID: \"b471ecc7-c8aa-40fd-bbe2-b16f4f36530f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-g8lm9"
	Oct 10 18:20:49 old-k8s-version-141193 kubelet[724]: I1010 18:20:49.075090     724 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/91f3f2ac-4ba3-40e9-8173-386bdbdd8dae-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-nsnjs\" (UID: \"91f3f2ac-4ba3-40e9-8173-386bdbdd8dae\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs"
	Oct 10 18:20:58 old-k8s-version-141193 kubelet[724]: I1010 18:20:58.028630     724 scope.go:117] "RemoveContainer" containerID="6ebe203dda94ad2ffbefc3adcdc8edca63de95384ffb197e95d7d948c64a7df8"
	Oct 10 18:20:58 old-k8s-version-141193 kubelet[724]: I1010 18:20:58.042111     724 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-g8lm9" podStartSLOduration=4.333954969 podCreationTimestamp="2025-10-10 18:20:48 +0000 UTC" firstStartedPulling="2025-10-10 18:20:49.305710675 +0000 UTC m=+16.467991000" lastFinishedPulling="2025-10-10 18:20:55.013794559 +0000 UTC m=+22.176074875" observedRunningTime="2025-10-10 18:20:56.033737534 +0000 UTC m=+23.196017866" watchObservedRunningTime="2025-10-10 18:20:58.042038844 +0000 UTC m=+25.204319175"
	Oct 10 18:20:59 old-k8s-version-141193 kubelet[724]: I1010 18:20:59.033762     724 scope.go:117] "RemoveContainer" containerID="6ebe203dda94ad2ffbefc3adcdc8edca63de95384ffb197e95d7d948c64a7df8"
	Oct 10 18:20:59 old-k8s-version-141193 kubelet[724]: I1010 18:20:59.034142     724 scope.go:117] "RemoveContainer" containerID="317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734"
	Oct 10 18:20:59 old-k8s-version-141193 kubelet[724]: E1010 18:20:59.034560     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nsnjs_kubernetes-dashboard(91f3f2ac-4ba3-40e9-8173-386bdbdd8dae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs" podUID="91f3f2ac-4ba3-40e9-8173-386bdbdd8dae"
	Oct 10 18:21:00 old-k8s-version-141193 kubelet[724]: I1010 18:21:00.037934     724 scope.go:117] "RemoveContainer" containerID="317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734"
	Oct 10 18:21:00 old-k8s-version-141193 kubelet[724]: E1010 18:21:00.039017     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nsnjs_kubernetes-dashboard(91f3f2ac-4ba3-40e9-8173-386bdbdd8dae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs" podUID="91f3f2ac-4ba3-40e9-8173-386bdbdd8dae"
	Oct 10 18:21:01 old-k8s-version-141193 kubelet[724]: I1010 18:21:01.039783     724 scope.go:117] "RemoveContainer" containerID="317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734"
	Oct 10 18:21:01 old-k8s-version-141193 kubelet[724]: E1010 18:21:01.040216     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nsnjs_kubernetes-dashboard(91f3f2ac-4ba3-40e9-8173-386bdbdd8dae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs" podUID="91f3f2ac-4ba3-40e9-8173-386bdbdd8dae"
	Oct 10 18:21:08 old-k8s-version-141193 kubelet[724]: I1010 18:21:08.057898     724 scope.go:117] "RemoveContainer" containerID="194d18ca204baa8431464117f4490a32c01a38dcdc5e3a8e68285f79bd382765"
	Oct 10 18:21:14 old-k8s-version-141193 kubelet[724]: I1010 18:21:14.938848     724 scope.go:117] "RemoveContainer" containerID="317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734"
	Oct 10 18:21:15 old-k8s-version-141193 kubelet[724]: I1010 18:21:15.075734     724 scope.go:117] "RemoveContainer" containerID="317b5fbf87bfe30a8ecf7698846a3261f5082a3c6dbc1013c34d952ea1f50734"
	Oct 10 18:21:15 old-k8s-version-141193 kubelet[724]: I1010 18:21:15.076046     724 scope.go:117] "RemoveContainer" containerID="1667847f042344bbbf08942c8b74a2e8385d5dfaf27c738cc310c23092d32a3d"
	Oct 10 18:21:15 old-k8s-version-141193 kubelet[724]: E1010 18:21:15.076454     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nsnjs_kubernetes-dashboard(91f3f2ac-4ba3-40e9-8173-386bdbdd8dae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs" podUID="91f3f2ac-4ba3-40e9-8173-386bdbdd8dae"
	Oct 10 18:21:19 old-k8s-version-141193 kubelet[724]: I1010 18:21:19.257336     724 scope.go:117] "RemoveContainer" containerID="1667847f042344bbbf08942c8b74a2e8385d5dfaf27c738cc310c23092d32a3d"
	Oct 10 18:21:19 old-k8s-version-141193 kubelet[724]: E1010 18:21:19.258133     724 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-nsnjs_kubernetes-dashboard(91f3f2ac-4ba3-40e9-8173-386bdbdd8dae)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-nsnjs" podUID="91f3f2ac-4ba3-40e9-8173-386bdbdd8dae"
	Oct 10 18:21:23 old-k8s-version-141193 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 10 18:21:23 old-k8s-version-141193 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 10 18:21:23 old-k8s-version-141193 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 10 18:21:23 old-k8s-version-141193 systemd[1]: kubelet.service: Consumed 1.503s CPU time.
	
	
	==> kubernetes-dashboard [7b7c62874a1a37307babd4ba819091e951bc357eb79ac3fa62cffe33dbb78e22] <==
	2025/10/10 18:20:55 Using namespace: kubernetes-dashboard
	2025/10/10 18:20:55 Using in-cluster config to connect to apiserver
	2025/10/10 18:20:55 Using secret token for csrf signing
	2025/10/10 18:20:55 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/10 18:20:55 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/10 18:20:55 Successful initial request to the apiserver, version: v1.28.0
	2025/10/10 18:20:55 Generating JWE encryption key
	2025/10/10 18:20:55 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/10 18:20:55 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/10 18:20:55 Initializing JWE encryption key from synchronized object
	2025/10/10 18:20:55 Creating in-cluster Sidecar client
	2025/10/10 18:20:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/10 18:20:55 Serving insecurely on HTTP port: 9090
	2025/10/10 18:21:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/10 18:20:55 Starting overwatch
	
	
	==> storage-provisioner [194d18ca204baa8431464117f4490a32c01a38dcdc5e3a8e68285f79bd382765] <==
	I1010 18:20:37.304104       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1010 18:21:07.306571       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [de2790671165db40044a25122f17100a12947ed8065bf6e2ed1ff37219e247dd] <==
	I1010 18:21:08.112562       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 18:21:08.121112       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 18:21:08.121187       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1010 18:21:25.519721       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 18:21:25.519871       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-141193_cf7d7c94-4b36-4fbd-a6c2-b666a5f185e5!
	I1010 18:21:25.519919       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"783e5569-4ec9-4de4-9b38-064b377c9a54", APIVersion:"v1", ResourceVersion:"615", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-141193_cf7d7c94-4b36-4fbd-a6c2-b666a5f185e5 became leader
	I1010 18:21:25.620150       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-141193_cf7d7c94-4b36-4fbd-a6c2-b666a5f185e5!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-141193 -n old-k8s-version-141193
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-141193 -n old-k8s-version-141193: exit status 2 (315.457974ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-141193 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.92s)
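Worth noting from the post-mortem above: the replacement storage-provisioner comes up at 18:21:08 but only starts its controller at 18:21:25, once it wins the kube-system/k8s.io-minikube-hostpath lease previously held by the instance that died on the API-server i/o timeout. A minimal client-go sketch of that acquire-then-work pattern follows; it is illustrative only, not the provisioner's code. The log shows a legacy Endpoints lock, while this sketch assumes the newer Lease lock, and the identity string and timings are made up:

// Sketch of client-go leader election, as seen in the storage-provisioner log.
// Assumptions: Lease lock (the real provisioner uses an Endpoints lock),
// "example-id" identity, and illustrative timing values.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // use the pod's service-account credentials
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-id"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // how long an acquired lease is valid
		RenewDeadline: 10 * time.Second, // stop leading if renewal takes longer
		RetryPeriod:   2 * time.Second,  // poll interval while waiting to acquire
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Only now does real work begin, matching the 17s gap in the log.
				log.Println("acquired lease; starting provisioner controller")
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; stopping")
			},
		},
	})
}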

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-556024 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-556024 --alsologtostderr -v=1: exit status 80 (1.946590669s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-556024 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 18:21:44.009088  328826 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:21:44.013433  328826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:44.013458  328826 out.go:374] Setting ErrFile to fd 2...
	I1010 18:21:44.013467  328826 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:44.013915  328826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:21:44.014346  328826 out.go:368] Setting JSON to false
	I1010 18:21:44.014406  328826 mustload.go:65] Loading cluster: no-preload-556024
	I1010 18:21:44.015234  328826 config.go:182] Loaded profile config "no-preload-556024": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:44.016317  328826 cli_runner.go:164] Run: docker container inspect no-preload-556024 --format={{.State.Status}}
	I1010 18:21:44.045631  328826 host.go:66] Checking if "no-preload-556024" exists ...
	I1010 18:21:44.046011  328826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:21:44.145307  328826 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-10 18:21:44.131730826 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:21:44.146253  328826 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-556024 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1010 18:21:44.148514  328826 out.go:179] * Pausing node no-preload-556024 ... 
	I1010 18:21:44.150208  328826 host.go:66] Checking if "no-preload-556024" exists ...
	I1010 18:21:44.150595  328826 ssh_runner.go:195] Run: systemctl --version
	I1010 18:21:44.150663  328826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-556024
	I1010 18:21:44.175483  328826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/no-preload-556024/id_rsa Username:docker}
	I1010 18:21:44.292445  328826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:44.306738  328826 pause.go:52] kubelet running: true
	I1010 18:21:44.306814  328826 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:21:44.472616  328826 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:21:44.472708  328826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:21:44.564827  328826 cri.go:89] found id: "881de1435156942e8e9fe01d8027baa3c3ac0ed457aa689f7625ac0b503df981"
	I1010 18:21:44.564854  328826 cri.go:89] found id: "1e3ad2e9d70e55e1c0f0706b095edba6bc813cc89953f666ce9c438a535fb038"
	I1010 18:21:44.564859  328826 cri.go:89] found id: "ded19ae952b01a25d91e7233536d7b2a7e1abc59c551437700353661b7888410"
	I1010 18:21:44.564863  328826 cri.go:89] found id: "7da7c710c0c97e371e285306921a08629d485643a3d7a010a63878a9e851b4ff"
	I1010 18:21:44.564867  328826 cri.go:89] found id: "58578d5735e6c09f8bee7a1bed1c2a6815baa58dec329a977a887f8e583cf301"
	I1010 18:21:44.564872  328826 cri.go:89] found id: "624948aa983f6a950a5a86e99ebbf4e3cec99b2849460ed697524b3fc4ffac05"
	I1010 18:21:44.564876  328826 cri.go:89] found id: "63abfddfe6fe2887c4901b8e265aae05ec3330bd42bd0d67e011b354a39c6023"
	I1010 18:21:44.564880  328826 cri.go:89] found id: "579953ecaa5c709ae190ac505c57c31de755d4d689b3be28199b4f18c038f574"
	I1010 18:21:44.564884  328826 cri.go:89] found id: "f690c75f2865bf33ee267a92d360114ddc8d677ee96e0e894aa2e4d900fd9adf"
	I1010 18:21:44.564892  328826 cri.go:89] found id: "2a18fa8993b6454a243ebedac42429e502364ba0ed77ebf8041dcadcd9e5da7a"
	I1010 18:21:44.564896  328826 cri.go:89] found id: "e0dd2d726bc067123461686a973a1bca5f3036eb38199d551b8302751e01c850"
	I1010 18:21:44.564900  328826 cri.go:89] found id: ""
	I1010 18:21:44.564943  328826 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:21:44.582377  328826 retry.go:31] will retry after 165.310977ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:44Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:21:44.748819  328826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:44.766264  328826 pause.go:52] kubelet running: false
	I1010 18:21:44.766431  328826 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:21:44.966721  328826 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:21:44.966856  328826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:21:45.055135  328826 cri.go:89] found id: "881de1435156942e8e9fe01d8027baa3c3ac0ed457aa689f7625ac0b503df981"
	I1010 18:21:45.055163  328826 cri.go:89] found id: "1e3ad2e9d70e55e1c0f0706b095edba6bc813cc89953f666ce9c438a535fb038"
	I1010 18:21:45.055169  328826 cri.go:89] found id: "ded19ae952b01a25d91e7233536d7b2a7e1abc59c551437700353661b7888410"
	I1010 18:21:45.055175  328826 cri.go:89] found id: "7da7c710c0c97e371e285306921a08629d485643a3d7a010a63878a9e851b4ff"
	I1010 18:21:45.055178  328826 cri.go:89] found id: "58578d5735e6c09f8bee7a1bed1c2a6815baa58dec329a977a887f8e583cf301"
	I1010 18:21:45.055182  328826 cri.go:89] found id: "624948aa983f6a950a5a86e99ebbf4e3cec99b2849460ed697524b3fc4ffac05"
	I1010 18:21:45.055185  328826 cri.go:89] found id: "63abfddfe6fe2887c4901b8e265aae05ec3330bd42bd0d67e011b354a39c6023"
	I1010 18:21:45.055188  328826 cri.go:89] found id: "579953ecaa5c709ae190ac505c57c31de755d4d689b3be28199b4f18c038f574"
	I1010 18:21:45.055191  328826 cri.go:89] found id: "f690c75f2865bf33ee267a92d360114ddc8d677ee96e0e894aa2e4d900fd9adf"
	I1010 18:21:45.055205  328826 cri.go:89] found id: "2a18fa8993b6454a243ebedac42429e502364ba0ed77ebf8041dcadcd9e5da7a"
	I1010 18:21:45.055209  328826 cri.go:89] found id: "e0dd2d726bc067123461686a973a1bca5f3036eb38199d551b8302751e01c850"
	I1010 18:21:45.055213  328826 cri.go:89] found id: ""
	I1010 18:21:45.055261  328826 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:21:45.069105  328826 retry.go:31] will retry after 495.261385ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:45Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:21:45.564778  328826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:45.583070  328826 pause.go:52] kubelet running: false
	I1010 18:21:45.583130  328826 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:21:45.780527  328826 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:21:45.780627  328826 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:21:45.862952  328826 cri.go:89] found id: "881de1435156942e8e9fe01d8027baa3c3ac0ed457aa689f7625ac0b503df981"
	I1010 18:21:45.862992  328826 cri.go:89] found id: "1e3ad2e9d70e55e1c0f0706b095edba6bc813cc89953f666ce9c438a535fb038"
	I1010 18:21:45.863000  328826 cri.go:89] found id: "ded19ae952b01a25d91e7233536d7b2a7e1abc59c551437700353661b7888410"
	I1010 18:21:45.863005  328826 cri.go:89] found id: "7da7c710c0c97e371e285306921a08629d485643a3d7a010a63878a9e851b4ff"
	I1010 18:21:45.863010  328826 cri.go:89] found id: "58578d5735e6c09f8bee7a1bed1c2a6815baa58dec329a977a887f8e583cf301"
	I1010 18:21:45.863018  328826 cri.go:89] found id: "624948aa983f6a950a5a86e99ebbf4e3cec99b2849460ed697524b3fc4ffac05"
	I1010 18:21:45.863022  328826 cri.go:89] found id: "63abfddfe6fe2887c4901b8e265aae05ec3330bd42bd0d67e011b354a39c6023"
	I1010 18:21:45.863027  328826 cri.go:89] found id: "579953ecaa5c709ae190ac505c57c31de755d4d689b3be28199b4f18c038f574"
	I1010 18:21:45.863031  328826 cri.go:89] found id: "f690c75f2865bf33ee267a92d360114ddc8d677ee96e0e894aa2e4d900fd9adf"
	I1010 18:21:45.863096  328826 cri.go:89] found id: "2a18fa8993b6454a243ebedac42429e502364ba0ed77ebf8041dcadcd9e5da7a"
	I1010 18:21:45.863107  328826 cri.go:89] found id: "e0dd2d726bc067123461686a973a1bca5f3036eb38199d551b8302751e01c850"
	I1010 18:21:45.863112  328826 cri.go:89] found id: ""
	I1010 18:21:45.863167  328826 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:21:45.880393  328826 out.go:203] 
	W1010 18:21:45.881663  328826 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 18:21:45.881686  328826 out.go:285] * 
	* 
	W1010 18:21:45.886625  328826 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 18:21:45.887562  328826 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-556024 --alsologtostderr -v=1 failed: exit status 80
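The failure above reduces to a single step: after disabling the kubelet, minikube asks the guest's runc for its running containers, but on this CRI-O node there is no /run/runc state directory (crictl still lists containers fine), so `sudo runc list -f json` exits 1 and the 165ms/495ms retries can only repeat the same error until pause gives up with GUEST_PAUSE. A minimal Go sketch of that list-and-retry step follows; it is not minikube's actual implementation. The command and the two retry delays are taken from the log above, everything else is assumed:

// Sketch of the failing container-listing step from the pause log.
// Assumption: run on the guest node with sudo and runc on PATH.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRuncContainers mirrors the command in the log. It fails here because
// runc's default state root /run/runc does not exist on this CRI-O node.
func listRuncContainers() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	// The two delays observed in retry.go above, then give up.
	delays := []time.Duration{165 * time.Millisecond, 495 * time.Millisecond}
	for i := 0; ; i++ {
		out, err := listRuncContainers()
		if err == nil {
			fmt.Printf("containers: %s\n", out)
			return
		}
		if i >= len(delays) {
			// This is the point where minikube exits with GUEST_PAUSE.
			fmt.Printf("giving up: %v\n", err)
			return
		}
		time.Sleep(delays[i])
	}
}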
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-556024
helpers_test.go:243: (dbg) docker inspect no-preload-556024:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d",
	        "Created": "2025-10-10T18:19:17.136910644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 316220,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:20:43.749189526Z",
	            "FinishedAt": "2025-10-10T18:20:42.849961702Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d/hostname",
	        "HostsPath": "/var/lib/docker/containers/6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d/hosts",
	        "LogPath": "/var/lib/docker/containers/6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d/6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d-json.log",
	        "Name": "/no-preload-556024",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-556024:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-556024",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d",
	                "LowerDir": "/var/lib/docker/overlay2/0169bc262a39812abb813c6e4211d609913a24a3210914aa4b5ed073144773c7-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0169bc262a39812abb813c6e4211d609913a24a3210914aa4b5ed073144773c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0169bc262a39812abb813c6e4211d609913a24a3210914aa4b5ed073144773c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0169bc262a39812abb813c6e4211d609913a24a3210914aa4b5ed073144773c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-556024",
	                "Source": "/var/lib/docker/volumes/no-preload-556024/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-556024",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-556024",
	                "name.minikube.sigs.k8s.io": "no-preload-556024",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9ad80de52da0d84be3cdbb960e93146da42bd59d7fe2697ea533defed1406d68",
	            "SandboxKey": "/var/run/docker/netns/9ad80de52da0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-556024": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:d7:f4:d3:25:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "62177a68d9eb1c876ff604502e8d1e7d060441f560a7646d94ff4c9f62d14c4b",
	                    "EndpointID": "8ec5baf412200f5e3f904447918f976c0a0d33b68751fc7ec85d2e02e9f946f5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-556024",
	                        "6784c6613c75"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
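
A note on reading the inspect dump above: every container port is published to 127.0.0.1 and the daemon assigns an ephemeral host port, which is why HostConfig shows an empty "HostPort" while NetworkSettings.Ports carries the real values (33118-33122). A minimal Go sketch of reading one mapping back, using the same inspect template that appears in the cli_runner lines later in this report (an illustration, not minikube's actual helper):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostPort shells out to `docker container inspect -f` with the Go
    // template seen in the logs below and returns the mapped host port.
    func hostPort(container, containerPort string) (string, error) {
    	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := hostPort("no-preload-556024", "22/tcp")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(port) // "33118", per the NetworkSettings above
    }
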
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-556024 -n no-preload-556024
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-556024 -n no-preload-556024: exit status 2 (374.334267ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-556024 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-556024 logs -n 25: (1.293745667s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
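
The dump below is klog-framed output; its own "Log line format" header gives the schema. Decoding the first start line as a worked example (the constant 325699/324649 fields identify the two minikube processes whose logs are interleaved here):

    I1010 18:21:36.443972  325699 out.go:360] Setting OutFile to fd 1 ...
    I               -> severity Info (W/E/F would be warning/error/fatal)
    1010            -> mmdd, i.e. October 10
    18:21:36.443972 -> hh:mm:ss.uuuuuu
    325699          -> threadid (here the process id; a second process, 324649, interleaves)
    out.go:360      -> file:line of the caller, followed by the message
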
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-078032 sudo crio config                                                                                                                                                                                                             │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ delete  │ -p bridge-078032                                                                                                                                                                                                                              │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-141193 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p old-k8s-version-141193 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ stop    │ -p embed-certs-472518 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-556024 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ delete  │ -p disable-driver-mounts-523797                                                                                                                                                                                                               │ disable-driver-mounts-523797 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ stop    │ -p no-preload-556024 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-472518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p embed-certs-472518 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable dashboard -p no-preload-556024 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p no-preload-556024 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-821769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-821769 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ image   │ old-k8s-version-141193 image list --format=json                                                                                                                                                                                               │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p old-k8s-version-141193 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-821769 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ no-preload-556024 image list --format=json                                                                                                                                                                                                    │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p no-preload-556024 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ embed-certs-472518 image list --format=json                                                                                                                                                                                                   │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:21:36
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:21:36.443972  325699 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:21:36.444232  325699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:36.444242  325699 out.go:374] Setting ErrFile to fd 2...
	I1010 18:21:36.444246  325699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:36.444423  325699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:21:36.444868  325699 out.go:368] Setting JSON to false
	I1010 18:21:36.445989  325699 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3836,"bootTime":1760116660,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:21:36.446111  325699 start.go:141] virtualization: kvm guest
	I1010 18:21:36.447655  325699 out.go:179] * [default-k8s-diff-port-821769] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:21:36.451745  325699 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:21:36.451794  325699 notify.go:220] Checking for updates...
	I1010 18:21:36.453782  325699 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:21:36.454903  325699 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:36.456168  325699 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:21:36.457303  325699 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:21:36.458541  325699 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:21:36.460107  325699 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:36.460644  325699 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:21:36.487553  325699 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:21:36.487706  325699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:21:36.548644  325699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:82 SystemTime:2025-10-10 18:21:36.539560881 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:21:36.548787  325699 docker.go:318] overlay module found
	I1010 18:21:36.550878  325699 out.go:179] * Using the docker driver based on existing profile
	I1010 18:21:31.750233  324649 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1010 18:21:31.750529  324649 start.go:159] libmachine.API.Create for "newest-cni-121129" (driver="docker")
	I1010 18:21:31.750565  324649 client.go:168] LocalClient.Create starting
	I1010 18:21:31.750670  324649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem
	I1010 18:21:31.750723  324649 main.go:141] libmachine: Decoding PEM data...
	I1010 18:21:31.750746  324649 main.go:141] libmachine: Parsing certificate...
	I1010 18:21:31.750822  324649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem
	I1010 18:21:31.750849  324649 main.go:141] libmachine: Decoding PEM data...
	I1010 18:21:31.750864  324649 main.go:141] libmachine: Parsing certificate...
	I1010 18:21:31.751250  324649 cli_runner.go:164] Run: docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1010 18:21:31.769180  324649 cli_runner.go:211] docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1010 18:21:31.769299  324649 network_create.go:284] running [docker network inspect newest-cni-121129] to gather additional debugging logs...
	I1010 18:21:31.769325  324649 cli_runner.go:164] Run: docker network inspect newest-cni-121129
	W1010 18:21:31.785789  324649 cli_runner.go:211] docker network inspect newest-cni-121129 returned with exit code 1
	I1010 18:21:31.785839  324649 network_create.go:287] error running [docker network inspect newest-cni-121129]: docker network inspect newest-cni-121129: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-121129 not found
	I1010 18:21:31.785860  324649 network_create.go:289] output of [docker network inspect newest-cni-121129]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-121129 not found
	
	** /stderr **
	I1010 18:21:31.785985  324649 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:21:31.803517  324649 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f8fb0c8a54c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:51:a2:ab:ca:d6} reservation:<nil>}
	I1010 18:21:31.804204  324649 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bdbbffbd65c1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:11:33:77:48:20} reservation:<nil>}
	I1010 18:21:31.804907  324649 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b6a5dab2001 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:93:a5:d3:c3:8f} reservation:<nil>}
	I1010 18:21:31.805493  324649 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-62177a68d9eb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:70:f2:a2:da:00} reservation:<nil>}
	I1010 18:21:31.806333  324649 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f75590}
	I1010 18:21:31.806360  324649 network_create.go:124] attempt to create docker network newest-cni-121129 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1010 18:21:31.806398  324649 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-121129 newest-cni-121129
	I1010 18:21:31.865994  324649 network_create.go:108] docker network newest-cni-121129 192.168.85.0/24 created
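
The four "skipping subnet" lines above show the selection logic at work: candidate 192.168.x.0/24 subnets are walked (49, 58, 67, 76, 85, ... i.e. steps of 9, as the log suggests) and rejected while some host interface (the br-* bridges here) already owns an address inside them; the first free one is used. A rough, self-contained Go sketch of that scan (not minikube's actual network.go):

    package main

    import (
    	"fmt"
    	"net"
    )

    // firstFreeSubnet walks candidate /24s and returns the first one in
    // which no local interface holds an address, mirroring the log above.
    func firstFreeSubnet() (*net.IPNet, error) {
    	ifaces, err := net.Interfaces()
    	if err != nil {
    		return nil, err
    	}
    	for third := 49; third < 255; third += 9 { // 49, 58, 67, 76, 85, ...
    		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
    		taken := false
    		for _, iface := range ifaces {
    			addrs, _ := iface.Addrs()
    			for _, a := range addrs {
    				if ipnet, ok := a.(*net.IPNet); ok && candidate.Contains(ipnet.IP) {
    					taken = true // e.g. br-62177a68d9eb holding 192.168.76.1
    				}
    			}
    		}
    		if !taken {
    			return candidate, nil
    		}
    	}
    	return nil, fmt.Errorf("no free private /24 found")
    }

    func main() {
    	subnet, err := firstFreeSubnet()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(subnet) // 192.168.85.0/24 on the host in this log
    }
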
	I1010 18:21:31.866029  324649 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-121129" container
	I1010 18:21:31.866140  324649 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1010 18:21:31.883599  324649 cli_runner.go:164] Run: docker volume create newest-cni-121129 --label name.minikube.sigs.k8s.io=newest-cni-121129 --label created_by.minikube.sigs.k8s.io=true
	I1010 18:21:31.901755  324649 oci.go:103] Successfully created a docker volume newest-cni-121129
	I1010 18:21:31.901834  324649 cli_runner.go:164] Run: docker run --rm --name newest-cni-121129-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-121129 --entrypoint /usr/bin/test -v newest-cni-121129:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -d /var/lib
	I1010 18:21:32.316917  324649 oci.go:107] Successfully prepared a docker volume newest-cni-121129
	I1010 18:21:32.316960  324649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:32.316979  324649 kic.go:194] Starting extracting preloaded images to volume ...
	I1010 18:21:32.317041  324649 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-121129:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1010 18:21:36.215225  324649 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-121129:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.898129423s)
	I1010 18:21:36.215274  324649 kic.go:203] duration metric: took 3.898290657s to extract preloaded images to volume ...
	W1010 18:21:36.215394  324649 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1010 18:21:36.215437  324649 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1010 18:21:36.215483  324649 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1010 18:21:36.276319  324649 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-121129 --name newest-cni-121129 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-121129 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-121129 --network newest-cni-121129 --ip 192.168.85.2 --volume newest-cni-121129:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6
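
This docker run line has the same shape as the one that produced the inspect dump at the top of this section (for no-preload-556024), so its flags can be checked against that JSON. A few worked mappings:

    --memory=3072mb                    -> "Memory": 3221225472 (3072 * 1024 * 1024 bytes), with "MemorySwap" at twice that, 6442450944
    --publish=127.0.0.1::8443          -> HostConfig requests {"HostIp": "127.0.0.1", "HostPort": ""}; the port the daemon picked (33121 there) then appears under NetworkSettings.Ports
    --tmpfs /tmp --tmpfs /run          -> "Tmpfs": {"/run": "", "/tmp": ""}
    --security-opt seccomp=unconfined  -> first entry of "SecurityOpt"
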
	I1010 18:21:36.552156  325699 start.go:305] selected driver: docker
	I1010 18:21:36.552182  325699 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:36.552263  325699 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:21:36.552888  325699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:21:36.619123  325699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-10 18:21:36.608354336 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:21:36.619511  325699 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:21:36.619549  325699 cni.go:84] Creating CNI manager for ""
	I1010 18:21:36.619602  325699 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:36.619655  325699 start.go:349] cluster config:
	{Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:36.621174  325699 out.go:179] * Starting "default-k8s-diff-port-821769" primary control-plane node in "default-k8s-diff-port-821769" cluster
	I1010 18:21:36.623163  325699 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:21:36.624439  325699 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:21:36.625488  325699 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:36.625524  325699 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:21:36.625536  325699 cache.go:58] Caching tarball of preloaded images
	I1010 18:21:36.625602  325699 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:21:36.625620  325699 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:21:36.625631  325699 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1010 18:21:36.625748  325699 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/config.json ...
	I1010 18:21:36.646734  325699 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:21:36.646759  325699 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:21:36.646779  325699 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:21:36.646809  325699 start.go:360] acquireMachinesLock for default-k8s-diff-port-821769: {Name:mk32364aa6b9096e7aa0195f0d450a3e04b4f6f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:21:36.646879  325699 start.go:364] duration metric: took 45.359µs to acquireMachinesLock for "default-k8s-diff-port-821769"
	I1010 18:21:36.646912  325699 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:21:36.646922  325699 fix.go:54] fixHost starting: 
	I1010 18:21:36.647229  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:36.665115  325699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-821769: state=Stopped err=<nil>
	W1010 18:21:36.665142  325699 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 18:21:36.566005  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Running}}
	I1010 18:21:36.587637  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:36.609439  324649 cli_runner.go:164] Run: docker exec newest-cni-121129 stat /var/lib/dpkg/alternatives/iptables
	I1010 18:21:36.654885  324649 oci.go:144] the created container "newest-cni-121129" has a running status.
	I1010 18:21:36.654911  324649 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa...
	I1010 18:21:37.150404  324649 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1010 18:21:37.181411  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:37.202450  324649 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1010 18:21:37.202483  324649 kic_runner.go:114] Args: [docker exec --privileged newest-cni-121129 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1010 18:21:37.249728  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:37.274026  324649 machine.go:93] provisionDockerMachine start ...
	I1010 18:21:37.274139  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:37.295767  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.296119  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:37.296140  324649 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:21:37.433206  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:21:37.433232  324649 ubuntu.go:182] provisioning hostname "newest-cni-121129"
	I1010 18:21:37.433293  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:37.451228  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.451497  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:37.451516  324649 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-121129 && echo "newest-cni-121129" | sudo tee /etc/hostname
	I1010 18:21:37.593295  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:21:37.593411  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:37.611384  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.611592  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:37.611611  324649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-121129' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-121129/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-121129' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:21:37.744646  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
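
Everything in this provisioning phase runs over SSH to the forwarded port on 127.0.0.1 (33123 for this container) as user docker, authenticating with the generated machine key shown earlier. A minimal sketch of such a client, assuming golang.org/x/crypto/ssh (the real libmachine client carries more state, as the &{{{<nil> ...}}} dumps above suggest):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH dials the forwarded port and runs one command, the way
    // the provisioning steps above run `hostname`, sed, tee, and friends.
    func runOverSSH(addr, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-only test container
    	})
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runOverSSH("127.0.0.1:33123",
    		"/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa",
    		"hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(out) // "newest-cni-121129"
    }
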
	I1010 18:21:37.744678  324649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:21:37.744702  324649 ubuntu.go:190] setting up certificates
	I1010 18:21:37.744714  324649 provision.go:84] configureAuth start
	I1010 18:21:37.744775  324649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:21:37.762585  324649 provision.go:143] copyHostCerts
	I1010 18:21:37.762636  324649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:21:37.762644  324649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:21:37.762711  324649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:21:37.762804  324649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:21:37.762812  324649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:21:37.762837  324649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:21:37.762889  324649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:21:37.762896  324649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:21:37.762918  324649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:21:37.762968  324649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.newest-cni-121129 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-121129]
	I1010 18:21:38.017732  324649 provision.go:177] copyRemoteCerts
	I1010 18:21:38.017792  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:21:38.017828  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.035754  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.135582  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:21:38.158372  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 18:21:38.177887  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:21:38.197335  324649 provision.go:87] duration metric: took 452.609625ms to configureAuth
	I1010 18:21:38.197361  324649 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:21:38.197520  324649 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:38.197616  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.215693  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:38.215929  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:38.215945  324649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:21:38.487590  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:21:38.487615  324649 machine.go:96] duration metric: took 1.213566349s to provisionDockerMachine
	I1010 18:21:38.487627  324649 client.go:171] duration metric: took 6.737054602s to LocalClient.Create
	I1010 18:21:38.487644  324649 start.go:167] duration metric: took 6.737116946s to libmachine.API.Create "newest-cni-121129"
	I1010 18:21:38.487653  324649 start.go:293] postStartSetup for "newest-cni-121129" (driver="docker")
	I1010 18:21:38.487667  324649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:21:38.487718  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:21:38.487755  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.505301  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.604755  324649 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:21:38.608251  324649 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:21:38.608275  324649 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:21:38.608284  324649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:21:38.608338  324649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:21:38.608407  324649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:21:38.608505  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:21:38.617071  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:38.639238  324649 start.go:296] duration metric: took 151.569017ms for postStartSetup
	I1010 18:21:38.639632  324649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:21:38.658650  324649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/config.json ...
	I1010 18:21:38.658910  324649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:21:38.658972  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.676393  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.770086  324649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:21:38.774771  324649 start.go:128] duration metric: took 7.026418609s to createHost
	I1010 18:21:38.774799  324649 start.go:83] releasing machines lock for "newest-cni-121129", held for 7.026572954s
	I1010 18:21:38.774867  324649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:21:38.794249  324649 ssh_runner.go:195] Run: cat /version.json
	I1010 18:21:38.794292  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.794343  324649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:21:38.794395  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.812781  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.813044  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.964620  324649 ssh_runner.go:195] Run: systemctl --version
	I1010 18:21:38.971493  324649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:21:39.008047  324649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:21:39.012702  324649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:21:39.012768  324649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:21:39.043167  324649 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:21:39.043195  324649 start.go:495] detecting cgroup driver to use...
	I1010 18:21:39.043236  324649 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:21:39.043275  324649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:21:39.060424  324649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:21:39.073422  324649 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:21:39.073477  324649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:21:39.090113  324649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:21:39.108184  324649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:21:39.193075  324649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:21:39.284238  324649 docker.go:234] disabling docker service ...
	I1010 18:21:39.284295  324649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:21:39.303174  324649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:21:39.316224  324649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:21:39.401593  324649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:21:39.486478  324649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:21:39.499671  324649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:21:39.515336  324649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:21:39.515393  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.526705  324649 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:21:39.526768  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.536968  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.546772  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.556927  324649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:21:39.566265  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.576240  324649 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.591514  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.601231  324649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:21:39.609546  324649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:21:39.617339  324649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:39.697520  324649 ssh_runner.go:195] Run: sudo systemctl restart crio
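
Two artifacts come out of the block above, from the crictl.yaml tee through the crio restart. The first is crictl's client config, which is why later crictl invocations in this log need no --runtime-endpoint flag; its full content is the one line piped through tee:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

The second is the edited CRI-O drop-in. A sketch of the result, assuming the stock kicbase 02-crio.conf (the section headers ship with the image; the seds only rewrite or insert the keys shown):

	# /etc/crio/crio.conf.d/02-crio.conf (sketch)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
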
	I1010 18:21:39.833447  324649 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:21:39.833510  324649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:21:39.837650  324649 start.go:563] Will wait 60s for crictl version
	I1010 18:21:39.837706  324649 ssh_runner.go:195] Run: which crictl
	I1010 18:21:39.841778  324649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:21:39.866403  324649 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:21:39.866489  324649 ssh_runner.go:195] Run: crio --version
	I1010 18:21:39.894594  324649 ssh_runner.go:195] Run: crio --version
	I1010 18:21:39.923363  324649 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:21:39.924491  324649 cli_runner.go:164] Run: docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
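
The --format template above flattens the Docker network into one JSON object. For this profile the output would look roughly like the following (the gateway and node address agree with the host-file entries just below; the driver, MTU, and exact subnet shown here are illustrative assumptions):

	{"Name": "newest-cni-121129","Driver": "bridge","Subnet": "192.168.85.0/24","Gateway": "192.168.85.1","MTU": 1500, "ContainerIPs": ["192.168.85.2/24",]}

The trailing comma inside ContainerIPs is an artifact of the {{range}} template, not a typo.
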
	I1010 18:21:39.942921  324649 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1010 18:21:39.947042  324649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
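
The /etc/hosts rewrite above uses a deliberate idiom: inside the container /etc/hosts is a Docker-managed bind mount, so it cannot be swapped in by rename; the old entry is filtered out, the new one appended into a temp file, and the file copied back in place. Generalized (IP and HOST are placeholders):

	# replace-or-add a tab-separated /etc/hosts entry without renaming the bind-mounted file
	IP=192.168.85.1 HOST=host.minikube.internal
	{ grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
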
	I1010 18:21:39.959308  324649 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1010 18:21:36.669200  325699 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-821769" ...
	I1010 18:21:36.669266  325699 cli_runner.go:164] Run: docker start default-k8s-diff-port-821769
	I1010 18:21:36.950209  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:36.973712  325699 kic.go:430] container "default-k8s-diff-port-821769" state is running.
	I1010 18:21:36.974205  325699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-821769
	I1010 18:21:36.999384  325699 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/config.json ...
	I1010 18:21:36.999678  325699 machine.go:93] provisionDockerMachine start ...
	I1010 18:21:36.999832  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:37.025140  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.025476  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:37.025494  325699 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:21:37.026335  325699 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37242->127.0.0.1:33128: read: connection reset by peer
	I1010 18:21:40.162873  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-821769
	
	I1010 18:21:40.162901  325699 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-821769"
	I1010 18:21:40.162999  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.189150  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:40.189443  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:40.189466  325699 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-821769 && echo "default-k8s-diff-port-821769" | sudo tee /etc/hostname
	I1010 18:21:40.331478  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-821769
	
	I1010 18:21:40.331570  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.349460  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:40.349752  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:40.349789  325699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-821769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-821769/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-821769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:21:40.495960  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:21:40.495988  325699 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:21:40.496005  325699 ubuntu.go:190] setting up certificates
	I1010 18:21:40.496013  325699 provision.go:84] configureAuth start
	I1010 18:21:40.496106  325699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-821769
	I1010 18:21:40.515849  325699 provision.go:143] copyHostCerts
	I1010 18:21:40.515918  325699 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:21:40.515937  325699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:21:40.516030  325699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:21:40.516170  325699 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:21:40.516190  325699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:21:40.516240  325699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:21:40.516317  325699 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:21:40.516328  325699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:21:40.516365  325699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:21:40.516437  325699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-821769 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-821769 localhost minikube]
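
minikube generates that server cert in Go (provision.go above), but an equivalent manual signing with openssl, purely as a sketch covering the same SAN list (file names here are placeholders, not minikube's paths), would be:

	# sign a server cert for the same SANs against the profile CA
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.default-k8s-diff-port-821769/CN=minikube" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 825 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.103.2,DNS:default-k8s-diff-port-821769,DNS:localhost,DNS:minikube')
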
	I1010 18:21:40.621000  325699 provision.go:177] copyRemoteCerts
	I1010 18:21:40.621136  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:21:40.621199  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.639539  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:40.738484  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:21:40.758076  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1010 18:21:40.777450  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:21:40.796411  325699 provision.go:87] duration metric: took 300.38696ms to configureAuth
	I1010 18:21:40.796439  325699 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:21:40.796606  325699 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:40.796693  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.814633  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:40.814851  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:40.814874  325699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:21:41.126788  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:21:41.126818  325699 machine.go:96] duration metric: took 4.127117296s to provisionDockerMachine
	I1010 18:21:41.126831  325699 start.go:293] postStartSetup for "default-k8s-diff-port-821769" (driver="docker")
	I1010 18:21:41.126845  325699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:21:41.126909  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:21:41.126956  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.146094  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:41.244401  325699 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:21:41.247953  325699 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:21:41.247984  325699 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:21:41.247996  325699 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:21:41.248060  325699 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:21:41.248175  325699 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:21:41.248266  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:21:41.256669  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:41.275845  325699 start.go:296] duration metric: took 149.001179ms for postStartSetup
	I1010 18:21:41.275913  325699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:21:41.275950  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.294158  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:41.387292  325699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:21:41.391952  325699 fix.go:56] duration metric: took 4.745025215s for fixHost
	I1010 18:21:41.391980  325699 start.go:83] releasing machines lock for "default-k8s-diff-port-821769", held for 4.745085816s
	I1010 18:21:41.392032  325699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-821769
	I1010 18:21:41.410356  325699 ssh_runner.go:195] Run: cat /version.json
	I1010 18:21:41.410400  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.410462  325699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:21:41.410537  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.428673  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:41.429174  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:39.960290  324649 kubeadm.go:883] updating cluster {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:21:39.960390  324649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:39.960442  324649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:39.991643  324649 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:39.991664  324649 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:21:39.991716  324649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:40.018213  324649 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:40.018233  324649 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:21:40.018240  324649 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1010 18:21:40.018331  324649 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-121129 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
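
In the unit above, --cgroups-per-qos=false and the empty --enforce-node-allocatable= have to travel together: kubelet refuses to start with QoS cgroups disabled while node-allocatable enforcement is still requested. Once the 10-kubeadm.conf drop-in is scp'd a few lines below, the merged unit can be inspected on the node with:

	# show kubelet.service plus every drop-in, exactly as systemd resolves it
	systemctl cat kubelet
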
	I1010 18:21:40.018427  324649 ssh_runner.go:195] Run: crio config
	I1010 18:21:40.065330  324649 cni.go:84] Creating CNI manager for ""
	I1010 18:21:40.065358  324649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:40.065375  324649 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1010 18:21:40.065395  324649 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-121129 NodeName:newest-cni-121129 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:21:40.065508  324649 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-121129"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:21:40.065561  324649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:21:40.074911  324649 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:21:40.074973  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:21:40.083566  324649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1010 18:21:40.097986  324649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:21:40.114282  324649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
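
The 2211-byte file just copied is the kubeadm config rendered above. As a sketch, kubeadm's own static validator could be pointed at it on the node (this assumes the `kubeadm config validate` subcommand, which modern releases ship):

	# static validation of the generated kubeadm config
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
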
	I1010 18:21:40.128847  324649 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:21:40.132698  324649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:21:40.143413  324649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:40.227094  324649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:40.249628  324649 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129 for IP: 192.168.85.2
	I1010 18:21:40.249652  324649 certs.go:195] generating shared ca certs ...
	I1010 18:21:40.249678  324649 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:40.249833  324649 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:21:40.249870  324649 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:21:40.249880  324649 certs.go:257] generating profile certs ...
	I1010 18:21:40.249964  324649 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key
	I1010 18:21:40.249986  324649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.crt with IP's: []
	I1010 18:21:40.601463  324649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.crt ...
	I1010 18:21:40.601490  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.crt: {Name:mk644ed6d675dd6a538c02d2c8e614b2a15b3122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:40.601663  324649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key ...
	I1010 18:21:40.601672  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key: {Name:mk914b6f6ffa18eaa800e7d301f088828f088f03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:40.601751  324649 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7
	I1010 18:21:40.601767  324649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1010 18:21:41.352224  324649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7 ...
	I1010 18:21:41.352248  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7: {Name:mkdef5060ad4b077648f6c85a78fa3bbbb5e73d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.352404  324649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7 ...
	I1010 18:21:41.352424  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7: {Name:mkfea0f84cddcdc4e3c69624946502bcf937c477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.352501  324649 certs.go:382] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7 -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt
	I1010 18:21:41.352570  324649 certs.go:386] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7 -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key
	I1010 18:21:41.352640  324649 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key
	I1010 18:21:41.352657  324649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt with IP's: []
	I1010 18:21:41.590793  325699 ssh_runner.go:195] Run: systemctl --version
	I1010 18:21:41.597352  325699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:21:41.632391  325699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:21:41.637267  325699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:21:41.637329  325699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:21:41.646619  325699 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1010 18:21:41.646643  325699 start.go:495] detecting cgroup driver to use...
	I1010 18:21:41.646672  325699 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:21:41.646707  325699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:21:41.662702  325699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:21:41.675945  325699 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:21:41.675998  325699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:21:41.690577  325699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:21:41.703139  325699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:21:41.785080  325699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:21:41.887442  325699 docker.go:234] disabling docker service ...
	I1010 18:21:41.887510  325699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:21:41.902511  325699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:21:41.915792  325699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:21:41.998153  325699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:21:42.082320  325699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:21:42.095388  325699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:21:42.110606  325699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:21:42.110668  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.120566  325699 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:21:42.120611  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.130445  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.140220  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.149997  325699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:21:42.159172  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.168739  325699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.177930  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.187922  325699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:21:42.196256  325699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:21:42.204604  325699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:42.288532  325699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:21:42.425073  325699 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:21:42.425143  325699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:21:42.429651  325699 start.go:563] Will wait 60s for crictl version
	I1010 18:21:42.429707  325699 ssh_runner.go:195] Run: which crictl
	I1010 18:21:42.433310  325699 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:21:42.459422  325699 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:21:42.459511  325699 ssh_runner.go:195] Run: crio --version
	I1010 18:21:42.491064  325699 ssh_runner.go:195] Run: crio --version
	I1010 18:21:42.523177  325699 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:21:42.524273  325699 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-821769 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:21:42.544600  325699 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1010 18:21:42.549336  325699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:21:42.561250  325699 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:21:42.561363  325699 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:42.561407  325699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:42.595069  325699 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:42.595092  325699 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:21:42.595137  325699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:42.621683  325699 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:42.621708  325699 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:21:42.621718  325699 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1010 18:21:42.621877  325699 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-821769 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:21:42.621955  325699 ssh_runner.go:195] Run: crio config
	I1010 18:21:42.670696  325699 cni.go:84] Creating CNI manager for ""
	I1010 18:21:42.670714  325699 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:42.670729  325699 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 18:21:42.670749  325699 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-821769 NodeName:default-k8s-diff-port-821769 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:21:42.670867  325699 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-821769"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:21:42.670920  325699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:21:42.679913  325699 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:21:42.679968  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:21:42.688618  325699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1010 18:21:42.703331  325699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:21:42.718311  325699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1010 18:21:42.732968  325699 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:21:42.736868  325699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:21:42.747553  325699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:42.829086  325699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:42.858574  325699 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769 for IP: 192.168.103.2
	I1010 18:21:42.858598  325699 certs.go:195] generating shared ca certs ...
	I1010 18:21:42.858623  325699 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:42.858780  325699 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:21:42.858834  325699 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:21:42.858849  325699 certs.go:257] generating profile certs ...
	I1010 18:21:42.858967  325699 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/client.key
	I1010 18:21:42.859085  325699 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/apiserver.key.10168654
	I1010 18:21:42.859140  325699 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/proxy-client.key
	I1010 18:21:42.859285  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:21:42.859321  325699 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:21:42.859336  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:21:42.859370  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:21:42.859399  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:21:42.859429  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:21:42.859481  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:42.860204  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:21:42.882094  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:21:42.903468  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:21:42.925737  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:21:42.953372  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 18:21:42.973504  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:21:42.992899  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:21:43.011728  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:21:43.030624  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:21:43.049802  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:21:43.070120  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:21:43.090039  325699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:21:43.103785  325699 ssh_runner.go:195] Run: openssl version
	I1010 18:21:43.110111  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:21:43.118950  325699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:21:43.122454  325699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:21:43.122512  325699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:21:43.157901  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:21:43.167111  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:21:43.176248  325699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:43.179836  325699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:43.179900  325699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:43.216894  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:21:43.226252  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:21:43.235390  325699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:21:43.239321  325699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:21:43.239380  325699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:21:43.273487  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
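
The three hash/symlink pairs above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's certificate-directory convention: openssl x509 -hash prints the subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs is what lets TLS clients resolve the CA by subject. One pair reproduced by hand:

	# link a CA into the OpenSSL lookup directory under its subject hash
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
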
	I1010 18:21:43.282570  325699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:21:43.286433  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:21:43.320357  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:21:43.361223  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:21:43.409478  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:21:43.456529  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:21:43.512033  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
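
Each -checkend 86400 above exits non-zero if the certificate expires within 86400 seconds (24 hours), which is presumably how minikube decides whether the existing kubeadm certs can be reused on restart. For example:

	# succeed only while the cert stays valid for at least one more day
	if sudo openssl x509 -noout -checkend 86400 \
	     -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
	  echo "cert valid for >=24h"
	else
	  echo "cert expires within 24h (or failed to parse)"
	fi
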
	I1010 18:21:43.568244  325699 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:43.568348  325699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:21:43.568440  325699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:21:43.611528  325699 cri.go:89] found id: "1352ca41b0e7626fbf6ee43638506dfab18bd157572e9128f411ac1c5ae54538"
	I1010 18:21:43.611555  325699 cri.go:89] found id: "2aeadcb9e03cc805af5eff4f1b521299f31e4d618387d10eef543b4e95787f70"
	I1010 18:21:43.611560  325699 cri.go:89] found id: "6c6e229b2a8311cf4d60aad6c602e02c2923b5ba2309e536076e40579456e8e2"
	I1010 18:21:43.611565  325699 cri.go:89] found id: "c3f03c923ad6830325d9888fdf2ad9de25ac73298e25b5812f72951d65af2eec"
	I1010 18:21:43.611569  325699 cri.go:89] found id: ""
	I1010 18:21:43.611612  325699 ssh_runner.go:195] Run: sudo runc list -f json
	W1010 18:21:43.627173  325699 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:43Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:21:43.627256  325699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:21:43.638581  325699 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1010 18:21:43.638602  325699 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1010 18:21:43.638652  325699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 18:21:43.650423  325699 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:21:43.651568  325699 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-821769" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:43.652341  325699 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-5815/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-821769" cluster setting kubeconfig missing "default-k8s-diff-port-821769" context setting]
	I1010 18:21:43.653567  325699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:43.655682  325699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 18:21:43.667709  325699 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1010 18:21:43.667743  325699 kubeadm.go:601] duration metric: took 29.134937ms to restartPrimaryControlPlane
	I1010 18:21:43.667753  325699 kubeadm.go:402] duration metric: took 99.518506ms to StartCluster
	I1010 18:21:43.667770  325699 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:43.667845  325699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:43.669889  325699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:43.670281  325699 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:21:43.670407  325699 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:21:43.670513  325699 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-821769"
	I1010 18:21:43.670534  325699 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-821769"
	W1010 18:21:43.670546  325699 addons.go:247] addon storage-provisioner should already be in state true
	I1010 18:21:43.670545  325699 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-821769"
	I1010 18:21:43.670572  325699 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-821769"
	I1010 18:21:43.670580  325699 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-821769"
	W1010 18:21:43.670582  325699 addons.go:247] addon dashboard should already be in state true
	I1010 18:21:43.670595  325699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-821769"
	I1010 18:21:43.670677  325699 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:21:43.670572  325699 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:21:43.670904  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.671151  325699 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:43.671356  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.672130  325699 out.go:179] * Verifying Kubernetes components...
	I1010 18:21:43.672709  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.673037  325699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:43.701170  325699 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:21:43.703152  325699 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:21:43.703189  325699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:21:43.703293  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:43.709767  325699 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-821769"
	W1010 18:21:43.709840  325699 addons.go:247] addon default-storageclass should already be in state true
	I1010 18:21:43.709890  325699 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:21:43.710622  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.711556  325699 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1010 18:21:43.715168  325699 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1010 18:21:43.716093  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1010 18:21:43.716116  325699 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1010 18:21:43.716174  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:43.745595  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:43.754680  325699 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:21:43.754766  325699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:21:43.754853  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:43.766642  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:43.784887  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:43.856990  325699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:43.873309  325699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-821769" to be "Ready" ...
	I1010 18:21:43.936166  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1010 18:21:43.936223  325699 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1010 18:21:43.955509  325699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:21:43.956951  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1010 18:21:43.956971  325699 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1010 18:21:43.985048  325699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:21:43.985772  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1010 18:21:43.986042  325699 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1010 18:21:44.008589  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1010 18:21:44.008614  325699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1010 18:21:44.034035  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1010 18:21:44.034165  325699 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1010 18:21:44.061163  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1010 18:21:44.061253  325699 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1010 18:21:44.112492  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1010 18:21:44.112518  325699 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1010 18:21:44.149803  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1010 18:21:44.149896  325699 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1010 18:21:44.172145  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:21:44.172172  325699 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1010 18:21:44.191656  325699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:21:45.474823  325699 node_ready.go:49] node "default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:45.474857  325699 node_ready.go:38] duration metric: took 1.601510652s for node "default-k8s-diff-port-821769" to be "Ready" ...
	I1010 18:21:45.474873  325699 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:21:45.474923  325699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:21:45.570164  325699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.614616389s)
	I1010 18:21:46.101989  325699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.116012627s)
	I1010 18:21:46.102157  325699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.910456027s)
	I1010 18:21:46.102189  325699 api_server.go:72] duration metric: took 2.431862039s to wait for apiserver process to appear ...
	I1010 18:21:46.102205  325699 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:21:46.102226  325699 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1010 18:21:46.103626  325699 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-821769 addons enable metrics-server
	
	I1010 18:21:46.104750  325699 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
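
At this point the profile is essentially up: the node went Ready in about 1.6s, the storageclass, storage-provisioner and dashboard manifests were applied, and the excerpt ends on the healthz probe against https://192.168.103.2:8444/healthz before its result is logged. The same probe can be run by hand, as a sketch (the CA path is an assumption based on minikube's default layout; this CI run keeps its minikube home under /home/jenkins/minikube-integration/21724-5815/.minikube):

    # Rough equivalent of minikube's apiserver healthz gate:
    curl --cacert ~/.minikube/ca.crt https://192.168.103.2:8444/healthz
    curl -k https://192.168.103.2:8444/healthz   # quick variant, skips verification; expect "ok"
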
	
	
	==> CRI-O <==
	Oct 10 18:21:06 no-preload-556024 crio[559]: time="2025-10-10T18:21:06.367152747Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 10 18:21:06 no-preload-556024 crio[559]: time="2025-10-10T18:21:06.373563306Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 10 18:21:06 no-preload-556024 crio[559]: time="2025-10-10T18:21:06.373596731Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.466237791Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8cf0cbb2-f128-4d97-9fc0-1ed24853303f name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.468637708Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9bcdd185-8d4f-4a45-b8ea-c8c7a13f6651 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.471655034Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5/dashboard-metrics-scraper" id=708874af-ce07-47d7-a80b-ede21344c52a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.473376693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.479306768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.479720775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.504019149Z" level=info msg="Created container 2a18fa8993b6454a243ebedac42429e502364ba0ed77ebf8041dcadcd9e5da7a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5/dashboard-metrics-scraper" id=708874af-ce07-47d7-a80b-ede21344c52a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.504592924Z" level=info msg="Starting container: 2a18fa8993b6454a243ebedac42429e502364ba0ed77ebf8041dcadcd9e5da7a" id=b97c642b-38c8-4c85-8be5-aaf11156313b name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.506323185Z" level=info msg="Started container" PID=1729 containerID=2a18fa8993b6454a243ebedac42429e502364ba0ed77ebf8041dcadcd9e5da7a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5/dashboard-metrics-scraper id=b97c642b-38c8-4c85-8be5-aaf11156313b name=/runtime.v1.RuntimeService/StartContainer sandboxID=917e06fdf420b0e993687a4384a79ceb85dbd499fb2362a355ed46f5bd86a3ce
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.5945516Z" level=info msg="Removing container: c208dd7c502b680f922ae34d5a7fabb1f0db3bb1cfdd0d5f8b721f4e24e5fb89" id=db7503a5-7081-4611-9db9-8221053c9f51 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.604121915Z" level=info msg="Removed container c208dd7c502b680f922ae34d5a7fabb1f0db3bb1cfdd0d5f8b721f4e24e5fb89: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5/dashboard-metrics-scraper" id=db7503a5-7081-4611-9db9-8221053c9f51 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.613225812Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=433c6c3f-6a9d-4ff9-b420-344491cdc65a name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.614244343Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fa78571d-ebd3-4703-aa74-f3d3b76abd97 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.615287147Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cb0e6dff-a33a-4347-9347-7d42fde3c1f7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.615550112Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.623433825Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.623641913Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e61a0504cf2b2fcf2fa6da1c51527e6d8a90fc4235bcabfbeb8a5316b55d2edf/merged/etc/passwd: no such file or directory"
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.623676486Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e61a0504cf2b2fcf2fa6da1c51527e6d8a90fc4235bcabfbeb8a5316b55d2edf/merged/etc/group: no such file or directory"
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.624007005Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.654904307Z" level=info msg="Created container 881de1435156942e8e9fe01d8027baa3c3ac0ed457aa689f7625ac0b503df981: kube-system/storage-provisioner/storage-provisioner" id=cb0e6dff-a33a-4347-9347-7d42fde3c1f7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.655548223Z" level=info msg="Starting container: 881de1435156942e8e9fe01d8027baa3c3ac0ed457aa689f7625ac0b503df981" id=cd3138c7-2bd7-4895-aaa0-9498c5b50f67 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.657437355Z" level=info msg="Started container" PID=1743 containerID=881de1435156942e8e9fe01d8027baa3c3ac0ed457aa689f7625ac0b503df981 description=kube-system/storage-provisioner/storage-provisioner id=cd3138c7-2bd7-4895-aaa0-9498c5b50f67 name=/runtime.v1.RuntimeService/StartContainer sandboxID=72b88b3eed5c381b2168fb59d5d4149e5cf6a1e56dafac47bc05cd8c7a335646
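
The CRI-O journal above shows the two container restarts this report cares about: 2a18fa8993b64... for dashboard-metrics-scraper (created, started, and its predecessor c208dd7c... removed) and 881de14351569... for storage-provisioner. The same window can be pulled straight from the node, as a sketch (assuming the runtime's service unit is named crio, as in the kicbase image):

    minikube -p no-preload-556024 ssh -- \
      sudo journalctl -u crio --since "2025-10-10 18:21:00" --no-pager
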
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	881de14351569       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   72b88b3eed5c3       storage-provisioner                          kube-system
	2a18fa8993b64       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   917e06fdf420b       dashboard-metrics-scraper-6ffb444bf9-trwt5   kubernetes-dashboard
	e0dd2d726bc06       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   a39d5cf3ab316       kubernetes-dashboard-855c9754f9-75n29        kubernetes-dashboard
	80f9feb04d7e5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   16838ce62a106       busybox                                      default
	1e3ad2e9d70e5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   60480a45efd67       coredns-66bc5c9577-wpsrd                     kube-system
	ded19ae952b01       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   afea00e8e001e       kube-proxy-frchp                             kube-system
	7da7c710c0c97       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   72b88b3eed5c3       storage-provisioner                          kube-system
	58578d5735e6c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   05899d20e642f       kindnet-wsk6h                                kube-system
	624948aa983f6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   1422d33c2807f       kube-apiserver-no-preload-556024             kube-system
	63abfddfe6fe2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   d12640afcfe0f       etcd-no-preload-556024                       kube-system
	579953ecaa5c7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   140a95566941e       kube-controller-manager-no-preload-556024    kube-system
	f690c75f2865b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   b9a286367331a       kube-scheduler-no-preload-556024             kube-system
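
This table is crictl-style ps output; note dashboard-metrics-scraper sitting in Exited with ATTEMPT 2 (the CrashLoopBackOff visible in the kubelet log further down) while everything else is Running. Reproducing and narrowing it on the node, as a sketch (the --state filter is assumed to be available in this crictl version):

    sudo crictl ps -a                     # everything, as in the table above
    sudo crictl ps -a --state Exited      # just the exited containers
    sudo crictl inspect 2a18fa8993b64     # details for the crash-looping scraper
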
	
	
	==> coredns [1e3ad2e9d70e55e1c0f0706b095edba6bc813cc89953f666ce9c438a535fb038] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48022 - 36274 "HINFO IN 2880962916233392715.4244643584359087425. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02393549s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
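
The trailing i/o timeouts against 10.96.0.1:443 are CoreDNS's informers racing the restarted kube-proxy and CNI: until service routing is reprogrammed, the kubernetes Service VIP is unreachable and the plugin retries. Checking that it settled, as a sketch (the k8s-app=kube-dns label is the kubeadm default and an assumption here):

    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
    kubectl -n kube-system get endpointslices -l kubernetes.io/service-name=kube-dns
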
	
	
	==> describe nodes <==
	Name:               no-preload-556024
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-556024
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=no-preload-556024
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_19_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:19:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-556024
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:21:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:21:24 +0000   Fri, 10 Oct 2025 18:19:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:21:24 +0000   Fri, 10 Oct 2025 18:19:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:21:24 +0000   Fri, 10 Oct 2025 18:19:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 18:21:24 +0000   Fri, 10 Oct 2025 18:20:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-556024
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                5de188e9-37d1-4335-8d19-aac53380f91c
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-wpsrd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-no-preload-556024                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-wsk6h                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-556024              250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-556024     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-frchp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-556024              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-trwt5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-75n29         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node no-preload-556024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node no-preload-556024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x8 over 2m2s)  kubelet          Node no-preload-556024 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node no-preload-556024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node no-preload-556024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node no-preload-556024 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s                 node-controller  Node no-preload-556024 event: Registered Node no-preload-556024 in Controller
	  Normal  NodeReady                97s                  kubelet          Node no-preload-556024 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node no-preload-556024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node no-preload-556024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node no-preload-556024 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                  node-controller  Node no-preload-556024 event: Registered Node no-preload-556024 in Controller
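
The describe output confirms a healthy single node: Ready since 18:20:10, no taints, all pressure conditions False, and three kubelet starts visible in the event stream (the initial boot and two restarts, the last one 57 seconds before this dump). Re-querying just the readiness condition:

    kubectl describe node no-preload-556024
    kubectl get node no-preload-556024 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
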
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 95 0c 3e 92 2e 08 06
	[  +0.052845] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[ +11.354316] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[  +7.101927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 9b 73 27 8c 80 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[  +6.287191] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 27 2d 28 d6 46 08 06
	[  +0.000293] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[Oct10 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 8c 22 f6 6b cf 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 29 bf 13 20 f9 08 06
	[ +15.511156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e d6 74 aa 27 d0 08 06
	[  +0.008495] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	[Oct10 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 0b 54 33 52 4e 08 06
	[  +0.000597] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
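
The martian-source messages are the kernel logging pod-CIDR traffic (10.244.0.x) arriving on eth0 with reverse-path filtering enabled; with kindnet inside a nested-container CI host this is routine noise rather than a failure signal. To view the same entries with wall-clock timestamps and inspect the controlling sysctls:

    sudo dmesg -T | grep -i martian | tail
    sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter
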
	
	
	==> etcd [63abfddfe6fe2887c4901b8e265aae05ec3330bd42bd0d67e011b354a39c6023] <==
	{"level":"warn","ts":"2025-10-10T18:20:53.620868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.628388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.642425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.651342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.659763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.667142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.675764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.683461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.692014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.719046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.726185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.735085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.743707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.754963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.762590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.771679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.779956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.787668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.799676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.804544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.812421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.827004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.835496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.843653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52388","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-10T18:21:35.889799Z","caller":"traceutil/trace.go:172","msg":"trace[2141740107] transaction","detail":"{read_only:false; response_revision:669; number_of_response:1; }","duration":"129.464837ms","start":"2025-10-10T18:21:35.760313Z","end":"2025-10-10T18:21:35.889778Z","steps":["trace[2141740107] 'process raft request'  (duration: 127.542372ms)"],"step_count":1}
	
	
	==> kernel <==
	 18:21:47 up  1:04,  0 user,  load average: 5.81, 4.72, 2.99
	Linux no-preload-556024 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [58578d5735e6c09f8bee7a1bed1c2a6815baa58dec329a977a887f8e583cf301] <==
	I1010 18:20:56.052455       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:20:56.052723       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1010 18:20:56.052858       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:20:56.052873       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:20:56.052890       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:20:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:20:56.351579       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:20:56.351616       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:20:56.351628       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:20:56.352765       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 18:20:56.751746       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:20:56.751997       1 metrics.go:72] Registering metrics
	I1010 18:20:56.752176       1 controller.go:711] "Syncing nftables rules"
	I1010 18:21:06.351255       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1010 18:21:06.351335       1 main.go:301] handling current node
	I1010 18:21:16.353564       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1010 18:21:16.353604       1 main.go:301] handling current node
	I1010 18:21:26.352123       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1010 18:21:26.352159       1 main.go:301] handling current node
	I1010 18:21:36.356142       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1010 18:21:36.356223       1 main.go:301] handling current node
	I1010 18:21:46.360158       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1010 18:21:46.360202       1 main.go:301] handling current node
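
kindnet is healthy: it syncs its caches and then reconciles the single node every ten seconds ("handling current node"). The one error, "nri plugin exited", is non-fatal; it means CRI-O is not exposing an NRI socket, which the network-policy plugin would otherwise attach to. Confirming that on the node (socket path taken from the log line itself):

    minikube -p no-preload-556024 ssh -- ls -l /var/run/nri/nri.sock
    # expected here: No such file or directory
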
	
	
	==> kube-apiserver [624948aa983f6a950a5a86e99ebbf4e3cec99b2849460ed697524b3fc4ffac05] <==
	I1010 18:20:54.529463       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1010 18:20:54.529543       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1010 18:20:54.530452       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1010 18:20:54.530564       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1010 18:20:54.530622       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1010 18:20:54.530579       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1010 18:20:54.531818       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1010 18:20:54.531884       1 aggregator.go:171] initial CRD sync complete...
	I1010 18:20:54.531917       1 autoregister_controller.go:144] Starting autoregister controller
	I1010 18:20:54.531924       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1010 18:20:54.531931       1 cache.go:39] Caches are synced for autoregister controller
	I1010 18:20:54.530591       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1010 18:20:54.536297       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1010 18:20:54.582822       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:20:54.906855       1 controller.go:667] quota admission added evaluator for: namespaces
	I1010 18:20:54.953780       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1010 18:20:54.992716       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:20:55.001496       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:20:55.008893       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1010 18:20:55.052037       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.94.107"}
	I1010 18:20:55.078220       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.23.206"}
	I1010 18:20:55.434262       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:20:57.890245       1 controller.go:667] quota admission added evaluator for: endpoints
	I1010 18:20:58.246414       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1010 18:20:58.439689       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [579953ecaa5c709ae190ac505c57c31de755d4d689b3be28199b4f18c038f574] <==
	I1010 18:20:57.888242       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1010 18:20:57.888527       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1010 18:20:57.888717       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1010 18:20:57.891575       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1010 18:20:57.892460       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1010 18:20:57.894544       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1010 18:20:57.894620       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:20:57.899104       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1010 18:20:57.901377       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1010 18:20:57.901506       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1010 18:20:57.910776       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 18:20:57.913923       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1010 18:20:57.914036       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1010 18:20:57.914208       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-556024"
	I1010 18:20:57.914275       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1010 18:20:57.917896       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1010 18:20:57.921271       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1010 18:20:57.924512       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1010 18:20:57.926779       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1010 18:20:57.929088       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1010 18:20:57.931399       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1010 18:20:57.937300       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1010 18:20:57.937369       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1010 18:20:57.937387       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1010 18:20:57.937458       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [ded19ae952b01a25d91e7233536d7b2a7e1abc59c551437700353661b7888410] <==
	I1010 18:20:56.006481       1 server_linux.go:53] "Using iptables proxy"
	I1010 18:20:56.065092       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 18:20:56.166356       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 18:20:56.166412       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1010 18:20:56.166513       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:20:56.191836       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:20:56.191911       1 server_linux.go:132] "Using iptables Proxier"
	I1010 18:20:56.198341       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:20:56.198839       1 server.go:527] "Version info" version="v1.34.1"
	I1010 18:20:56.198915       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:20:56.206152       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 18:20:56.206176       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 18:20:56.206184       1 config.go:309] "Starting node config controller"
	I1010 18:20:56.206199       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 18:20:56.206207       1 config.go:200] "Starting service config controller"
	I1010 18:20:56.206213       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 18:20:56.206229       1 config.go:106] "Starting endpoint slice config controller"
	I1010 18:20:56.206234       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 18:20:56.306856       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 18:20:56.306983       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1010 18:20:56.306986       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 18:20:56.307029       1 shared_informer.go:356] "Caches are synced" controller="service config"
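
kube-proxy comes up cleanly in iptables mode; its only complaint is the unset nodePortAddresses, and the message itself suggests the fix, --nodeport-addresses primary. For a kubeadm-managed cluster that setting lives in the kube-proxy ConfigMap, as a sketch (editing the live ConfigMap, and the "primary" keyword being supported by this kube-proxy version, are both assumptions):

    kubectl -n kube-system edit configmap kube-proxy
    # then under config.conf set:
    #   nodePortAddresses: ["primary"]
    # and restart the daemonset to pick it up:
    kubectl -n kube-system rollout restart daemonset kube-proxy
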
	
	
	==> kube-scheduler [f690c75f2865bf33ee267a92d360114ddc8d677ee96e0e894aa2e4d900fd9adf] <==
	I1010 18:20:51.679920       1 serving.go:386] Generated self-signed cert in-memory
	W1010 18:20:54.449453       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1010 18:20:54.449561       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1010 18:20:54.449596       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1010 18:20:54.449628       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1010 18:20:54.498339       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1010 18:20:54.499561       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:20:54.504850       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:20:54.504962       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:20:54.505467       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1010 18:20:54.507127       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1010 18:20:54.605690       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
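
The scheduler's three startup warnings all stem from one missing RBAC grant: it cannot read the extension-apiserver-authentication ConfigMap, so it continues without that authentication configuration, which is harmless for these tests. The warning carries its own remedy; adapted as a sketch, with the binding name chosen here and the subject expressed as the scheduler's user rather than the placeholder service account from the message:

    kubectl -n kube-system create rolebinding extension-apiserver-authn-reader \
      --role=extension-apiserver-authentication-reader \
      --user=system:kube-scheduler
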
	
	
	==> kubelet <==
	Oct 10 18:20:58 no-preload-556024 kubelet[701]: I1010 18:20:58.495222     701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2209ed8b-b88a-45f4-a57a-36decaa54d79-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-75n29\" (UID: \"2209ed8b-b88a-45f4-a57a-36decaa54d79\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-75n29"
	Oct 10 18:20:58 no-preload-556024 kubelet[701]: I1010 18:20:58.495247     701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqt9x\" (UniqueName: \"kubernetes.io/projected/2209ed8b-b88a-45f4-a57a-36decaa54d79-kube-api-access-wqt9x\") pod \"kubernetes-dashboard-855c9754f9-75n29\" (UID: \"2209ed8b-b88a-45f4-a57a-36decaa54d79\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-75n29"
	Oct 10 18:21:00 no-preload-556024 kubelet[701]: I1010 18:21:00.614672     701 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 10 18:21:01 no-preload-556024 kubelet[701]: I1010 18:21:01.537613     701 scope.go:117] "RemoveContainer" containerID="8bd2d295cec21f02ffe4e195db323b302dc12842744232f9e87648aa06f4bce2"
	Oct 10 18:21:02 no-preload-556024 kubelet[701]: I1010 18:21:02.546683     701 scope.go:117] "RemoveContainer" containerID="c208dd7c502b680f922ae34d5a7fabb1f0db3bb1cfdd0d5f8b721f4e24e5fb89"
	Oct 10 18:21:02 no-preload-556024 kubelet[701]: I1010 18:21:02.547324     701 scope.go:117] "RemoveContainer" containerID="8bd2d295cec21f02ffe4e195db323b302dc12842744232f9e87648aa06f4bce2"
	Oct 10 18:21:02 no-preload-556024 kubelet[701]: E1010 18:21:02.548007     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-trwt5_kubernetes-dashboard(e4ce8751-5cd9-47b8-8093-bdcd167eabac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5" podUID="e4ce8751-5cd9-47b8-8093-bdcd167eabac"
	Oct 10 18:21:03 no-preload-556024 kubelet[701]: I1010 18:21:03.551001     701 scope.go:117] "RemoveContainer" containerID="c208dd7c502b680f922ae34d5a7fabb1f0db3bb1cfdd0d5f8b721f4e24e5fb89"
	Oct 10 18:21:03 no-preload-556024 kubelet[701]: E1010 18:21:03.551278     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-trwt5_kubernetes-dashboard(e4ce8751-5cd9-47b8-8093-bdcd167eabac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5" podUID="e4ce8751-5cd9-47b8-8093-bdcd167eabac"
	Oct 10 18:21:05 no-preload-556024 kubelet[701]: I1010 18:21:05.906309     701 scope.go:117] "RemoveContainer" containerID="c208dd7c502b680f922ae34d5a7fabb1f0db3bb1cfdd0d5f8b721f4e24e5fb89"
	Oct 10 18:21:05 no-preload-556024 kubelet[701]: E1010 18:21:05.906493     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-trwt5_kubernetes-dashboard(e4ce8751-5cd9-47b8-8093-bdcd167eabac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5" podUID="e4ce8751-5cd9-47b8-8093-bdcd167eabac"
	Oct 10 18:21:06 no-preload-556024 kubelet[701]: I1010 18:21:06.569804     701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-75n29" podStartSLOduration=1.367649614 podStartE2EDuration="8.569780413s" podCreationTimestamp="2025-10-10 18:20:58 +0000 UTC" firstStartedPulling="2025-10-10 18:20:58.691517188 +0000 UTC m=+8.350659870" lastFinishedPulling="2025-10-10 18:21:05.89364797 +0000 UTC m=+15.552790669" observedRunningTime="2025-10-10 18:21:06.569538451 +0000 UTC m=+16.228681152" watchObservedRunningTime="2025-10-10 18:21:06.569780413 +0000 UTC m=+16.228923114"
	Oct 10 18:21:19 no-preload-556024 kubelet[701]: I1010 18:21:19.465678     701 scope.go:117] "RemoveContainer" containerID="c208dd7c502b680f922ae34d5a7fabb1f0db3bb1cfdd0d5f8b721f4e24e5fb89"
	Oct 10 18:21:19 no-preload-556024 kubelet[701]: I1010 18:21:19.593281     701 scope.go:117] "RemoveContainer" containerID="c208dd7c502b680f922ae34d5a7fabb1f0db3bb1cfdd0d5f8b721f4e24e5fb89"
	Oct 10 18:21:19 no-preload-556024 kubelet[701]: I1010 18:21:19.593516     701 scope.go:117] "RemoveContainer" containerID="2a18fa8993b6454a243ebedac42429e502364ba0ed77ebf8041dcadcd9e5da7a"
	Oct 10 18:21:19 no-preload-556024 kubelet[701]: E1010 18:21:19.593727     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-trwt5_kubernetes-dashboard(e4ce8751-5cd9-47b8-8093-bdcd167eabac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5" podUID="e4ce8751-5cd9-47b8-8093-bdcd167eabac"
	Oct 10 18:21:25 no-preload-556024 kubelet[701]: I1010 18:21:25.907154     701 scope.go:117] "RemoveContainer" containerID="2a18fa8993b6454a243ebedac42429e502364ba0ed77ebf8041dcadcd9e5da7a"
	Oct 10 18:21:25 no-preload-556024 kubelet[701]: E1010 18:21:25.907338     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-trwt5_kubernetes-dashboard(e4ce8751-5cd9-47b8-8093-bdcd167eabac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5" podUID="e4ce8751-5cd9-47b8-8093-bdcd167eabac"
	Oct 10 18:21:26 no-preload-556024 kubelet[701]: I1010 18:21:26.612732     701 scope.go:117] "RemoveContainer" containerID="7da7c710c0c97e371e285306921a08629d485643a3d7a010a63878a9e851b4ff"
	Oct 10 18:21:36 no-preload-556024 kubelet[701]: I1010 18:21:36.465986     701 scope.go:117] "RemoveContainer" containerID="2a18fa8993b6454a243ebedac42429e502364ba0ed77ebf8041dcadcd9e5da7a"
	Oct 10 18:21:36 no-preload-556024 kubelet[701]: E1010 18:21:36.466256     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-trwt5_kubernetes-dashboard(e4ce8751-5cd9-47b8-8093-bdcd167eabac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5" podUID="e4ce8751-5cd9-47b8-8093-bdcd167eabac"
	Oct 10 18:21:44 no-preload-556024 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 10 18:21:44 no-preload-556024 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 10 18:21:44 no-preload-556024 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 10 18:21:44 no-preload-556024 systemd[1]: kubelet.service: Consumed 1.702s CPU time.
	
	
	==> kubernetes-dashboard [e0dd2d726bc067123461686a973a1bca5f3036eb38199d551b8302751e01c850] <==
	2025/10/10 18:21:05 Starting overwatch
	2025/10/10 18:21:05 Using namespace: kubernetes-dashboard
	2025/10/10 18:21:05 Using in-cluster config to connect to apiserver
	2025/10/10 18:21:05 Using secret token for csrf signing
	2025/10/10 18:21:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/10 18:21:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/10 18:21:05 Successful initial request to the apiserver, version: v1.34.1
	2025/10/10 18:21:05 Generating JWE encryption key
	2025/10/10 18:21:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/10 18:21:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/10 18:21:06 Initializing JWE encryption key from synchronized object
	2025/10/10 18:21:06 Creating in-cluster Sidecar client
	2025/10/10 18:21:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/10 18:21:06 Serving insecurely on HTTP port: 9090
	2025/10/10 18:21:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7da7c710c0c97e371e285306921a08629d485643a3d7a010a63878a9e851b4ff] <==
	I1010 18:20:55.941717       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1010 18:21:25.946444       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [881de1435156942e8e9fe01d8027baa3c3ac0ed457aa689f7625ac0b503df981] <==
	I1010 18:21:26.673854       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 18:21:26.682163       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 18:21:26.682210       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1010 18:21:26.685013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:30.140562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:34.400926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:37.999245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:41.053925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:44.079252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:44.094198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:21:44.094969       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 18:21:44.095253       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-556024_24cc3ef3-641a-48a7-a62b-899ab2362c20!
	I1010 18:21:44.095280       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"239ef5d2-e469-4829-842f-94522e30a190", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-556024_24cc3ef3-641a-48a7-a62b-899ab2362c20 became leader
	W1010 18:21:44.103130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:44.125014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:21:44.195545       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-556024_24cc3ef3-641a-48a7-a62b-899ab2362c20!
	W1010 18:21:46.128265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:46.131819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
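The kubelet log above shows the crash-loop back-off for dashboard-metrics-scraper doubling from "back-off 10s" to "back-off 20s" between restarts. A minimal sketch of that schedule, assuming the kubelet defaults of a 10s initial delay, a doubling factor, and a 5m cap (the file name and loop bound are illustrative, not from the log):

	// backoff.go: prints the crash-loop back-off schedule a kubelet applies
	// to a repeatedly failing container, under the assumed defaults
	// (10s initial delay, factor 2, 5m ceiling).
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		delay := 10 * time.Second
		const maxDelay = 5 * time.Minute
		for restart := 1; restart <= 7; restart++ {
			fmt.Printf("restart %d: back-off %s\n", restart, delay)
			delay *= 2 // the delay doubles after each failed restart
			if delay > maxDelay {
				delay = maxDelay // capped at the assumed 5m ceiling
			}
		}
	}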
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-556024 -n no-preload-556024
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-556024 -n no-preload-556024: exit status 2 (328.156808ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-556024 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-556024
helpers_test.go:243: (dbg) docker inspect no-preload-556024:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d",
	        "Created": "2025-10-10T18:19:17.136910644Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 316220,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:20:43.749189526Z",
	            "FinishedAt": "2025-10-10T18:20:42.849961702Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d/hostname",
	        "HostsPath": "/var/lib/docker/containers/6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d/hosts",
	        "LogPath": "/var/lib/docker/containers/6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d/6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d-json.log",
	        "Name": "/no-preload-556024",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-556024:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-556024",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6784c6613c75278df31ee5b4585740e0799e214fb638665fee04ee1f04ba890d",
	                "LowerDir": "/var/lib/docker/overlay2/0169bc262a39812abb813c6e4211d609913a24a3210914aa4b5ed073144773c7-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0169bc262a39812abb813c6e4211d609913a24a3210914aa4b5ed073144773c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0169bc262a39812abb813c6e4211d609913a24a3210914aa4b5ed073144773c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0169bc262a39812abb813c6e4211d609913a24a3210914aa4b5ed073144773c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-556024",
	                "Source": "/var/lib/docker/volumes/no-preload-556024/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-556024",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-556024",
	                "name.minikube.sigs.k8s.io": "no-preload-556024",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9ad80de52da0d84be3cdbb960e93146da42bd59d7fe2697ea533defed1406d68",
	            "SandboxKey": "/var/run/docker/netns/9ad80de52da0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-556024": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:d7:f4:d3:25:e1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "62177a68d9eb1c876ff604502e8d1e7d060441f560a7646d94ff4c9f62d14c4b",
	                    "EndpointID": "8ec5baf412200f5e3f904447918f976c0a0d33b68751fc7ec85d2e02e9f946f5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-556024",
	                        "6784c6613c75"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
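The inspect output above reports "Status": "running" with "Paused": false even though the test had just paused the profile; this is consistent with minikube pause suspending the workloads inside the node rather than docker-pausing the kic container itself. The relevant state fields can be pulled out directly with a Go template (a hedged example reusing the container name from above):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-556024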
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-556024 -n no-preload-556024
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-556024 -n no-preload-556024: exit status 2 (328.983644ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
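A non-zero exit from minikube status generally signals a stopped or paused component while the host container itself is still up, which is why the harness treats exit status 2 as "(may be ok)". For a fuller machine-readable snapshot than a single --format field, status also accepts an output flag, e.g.:

	out/minikube-linux-amd64 status -p no-preload-556024 --output json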
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-556024 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-556024 logs -n 25: (1.322809799s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p bridge-078032                                                                                                                                                                                                                              │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-141193 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p old-k8s-version-141193 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ stop    │ -p embed-certs-472518 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-556024 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ delete  │ -p disable-driver-mounts-523797                                                                                                                                                                                                               │ disable-driver-mounts-523797 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ stop    │ -p no-preload-556024 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-472518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p embed-certs-472518 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable dashboard -p no-preload-556024 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p no-preload-556024 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-821769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-821769 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ image   │ old-k8s-version-141193 image list --format=json                                                                                                                                                                                               │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p old-k8s-version-141193 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-821769 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ no-preload-556024 image list --format=json                                                                                                                                                                                                    │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p no-preload-556024 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ embed-certs-472518 image list --format=json                                                                                                                                                                                                   │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p embed-certs-472518 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:21:36
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:21:36.443972  325699 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:21:36.444232  325699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:36.444242  325699 out.go:374] Setting ErrFile to fd 2...
	I1010 18:21:36.444246  325699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:36.444423  325699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:21:36.444868  325699 out.go:368] Setting JSON to false
	I1010 18:21:36.445989  325699 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3836,"bootTime":1760116660,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:21:36.446111  325699 start.go:141] virtualization: kvm guest
	I1010 18:21:36.447655  325699 out.go:179] * [default-k8s-diff-port-821769] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:21:36.451745  325699 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:21:36.451794  325699 notify.go:220] Checking for updates...
	I1010 18:21:36.453782  325699 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:21:36.454903  325699 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:36.456168  325699 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:21:36.457303  325699 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:21:36.458541  325699 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:21:36.460107  325699 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:36.460644  325699 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:21:36.487553  325699 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:21:36.487706  325699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:21:36.548644  325699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:82 SystemTime:2025-10-10 18:21:36.539560881 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:21:36.548787  325699 docker.go:318] overlay module found
	I1010 18:21:36.550878  325699 out.go:179] * Using the docker driver based on existing profile
	I1010 18:21:31.750233  324649 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1010 18:21:31.750529  324649 start.go:159] libmachine.API.Create for "newest-cni-121129" (driver="docker")
	I1010 18:21:31.750565  324649 client.go:168] LocalClient.Create starting
	I1010 18:21:31.750670  324649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem
	I1010 18:21:31.750723  324649 main.go:141] libmachine: Decoding PEM data...
	I1010 18:21:31.750746  324649 main.go:141] libmachine: Parsing certificate...
	I1010 18:21:31.750822  324649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem
	I1010 18:21:31.750849  324649 main.go:141] libmachine: Decoding PEM data...
	I1010 18:21:31.750864  324649 main.go:141] libmachine: Parsing certificate...
	I1010 18:21:31.751250  324649 cli_runner.go:164] Run: docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1010 18:21:31.769180  324649 cli_runner.go:211] docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1010 18:21:31.769299  324649 network_create.go:284] running [docker network inspect newest-cni-121129] to gather additional debugging logs...
	I1010 18:21:31.769325  324649 cli_runner.go:164] Run: docker network inspect newest-cni-121129
	W1010 18:21:31.785789  324649 cli_runner.go:211] docker network inspect newest-cni-121129 returned with exit code 1
	I1010 18:21:31.785839  324649 network_create.go:287] error running [docker network inspect newest-cni-121129]: docker network inspect newest-cni-121129: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-121129 not found
	I1010 18:21:31.785860  324649 network_create.go:289] output of [docker network inspect newest-cni-121129]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-121129 not found
	
	** /stderr **
	I1010 18:21:31.785985  324649 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:21:31.803517  324649 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f8fb0c8a54c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:51:a2:ab:ca:d6} reservation:<nil>}
	I1010 18:21:31.804204  324649 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bdbbffbd65c1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:11:33:77:48:20} reservation:<nil>}
	I1010 18:21:31.804907  324649 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b6a5dab2001 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:93:a5:d3:c3:8f} reservation:<nil>}
	I1010 18:21:31.805493  324649 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-62177a68d9eb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:70:f2:a2:da:00} reservation:<nil>}
	I1010 18:21:31.806333  324649 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f75590}
	I1010 18:21:31.806360  324649 network_create.go:124] attempt to create docker network newest-cni-121129 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1010 18:21:31.806398  324649 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-121129 newest-cni-121129
	I1010 18:21:31.865994  324649 network_create.go:108] docker network newest-cni-121129 192.168.85.0/24 created
	I1010 18:21:31.866029  324649 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-121129" container
	I1010 18:21:31.866140  324649 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1010 18:21:31.883599  324649 cli_runner.go:164] Run: docker volume create newest-cni-121129 --label name.minikube.sigs.k8s.io=newest-cni-121129 --label created_by.minikube.sigs.k8s.io=true
	I1010 18:21:31.901755  324649 oci.go:103] Successfully created a docker volume newest-cni-121129
	I1010 18:21:31.901834  324649 cli_runner.go:164] Run: docker run --rm --name newest-cni-121129-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-121129 --entrypoint /usr/bin/test -v newest-cni-121129:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -d /var/lib
	I1010 18:21:32.316917  324649 oci.go:107] Successfully prepared a docker volume newest-cni-121129
	I1010 18:21:32.316960  324649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:32.316979  324649 kic.go:194] Starting extracting preloaded images to volume ...
	I1010 18:21:32.317041  324649 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-121129:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1010 18:21:36.215225  324649 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-121129:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.898129423s)
	I1010 18:21:36.215274  324649 kic.go:203] duration metric: took 3.898290657s to extract preloaded images to volume ...
	W1010 18:21:36.215394  324649 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1010 18:21:36.215437  324649 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1010 18:21:36.215483  324649 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1010 18:21:36.276319  324649 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-121129 --name newest-cni-121129 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-121129 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-121129 --network newest-cni-121129 --ip 192.168.85.2 --volume newest-cni-121129:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6
	I1010 18:21:36.552156  325699 start.go:305] selected driver: docker
	I1010 18:21:36.552182  325699 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:36.552263  325699 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:21:36.552888  325699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:21:36.619123  325699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-10 18:21:36.608354336 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:21:36.619511  325699 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:21:36.619549  325699 cni.go:84] Creating CNI manager for ""
	I1010 18:21:36.619602  325699 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:36.619655  325699 start.go:349] cluster config:
	{Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:36.621174  325699 out.go:179] * Starting "default-k8s-diff-port-821769" primary control-plane node in "default-k8s-diff-port-821769" cluster
	I1010 18:21:36.623163  325699 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:21:36.624439  325699 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:21:36.625488  325699 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:36.625524  325699 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:21:36.625536  325699 cache.go:58] Caching tarball of preloaded images
	I1010 18:21:36.625602  325699 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:21:36.625620  325699 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:21:36.625631  325699 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1010 18:21:36.625748  325699 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/config.json ...
	I1010 18:21:36.646734  325699 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:21:36.646759  325699 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:21:36.646779  325699 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:21:36.646809  325699 start.go:360] acquireMachinesLock for default-k8s-diff-port-821769: {Name:mk32364aa6b9096e7aa0195f0d450a3e04b4f6f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:21:36.646879  325699 start.go:364] duration metric: took 45.359µs to acquireMachinesLock for "default-k8s-diff-port-821769"
	I1010 18:21:36.646912  325699 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:21:36.646922  325699 fix.go:54] fixHost starting: 
	I1010 18:21:36.647229  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:36.665115  325699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-821769: state=Stopped err=<nil>
	W1010 18:21:36.665142  325699 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 18:21:36.566005  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Running}}
	I1010 18:21:36.587637  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:36.609439  324649 cli_runner.go:164] Run: docker exec newest-cni-121129 stat /var/lib/dpkg/alternatives/iptables
	I1010 18:21:36.654885  324649 oci.go:144] the created container "newest-cni-121129" has a running status.
	I1010 18:21:36.654911  324649 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa...
	I1010 18:21:37.150404  324649 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1010 18:21:37.181411  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:37.202450  324649 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1010 18:21:37.202483  324649 kic_runner.go:114] Args: [docker exec --privileged newest-cni-121129 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1010 18:21:37.249728  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:37.274026  324649 machine.go:93] provisionDockerMachine start ...
	I1010 18:21:37.274139  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:37.295767  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.296119  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:37.296140  324649 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:21:37.433206  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:21:37.433232  324649 ubuntu.go:182] provisioning hostname "newest-cni-121129"
	I1010 18:21:37.433293  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:37.451228  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.451497  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:37.451516  324649 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-121129 && echo "newest-cni-121129" | sudo tee /etc/hostname
	I1010 18:21:37.593295  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:21:37.593411  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:37.611384  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.611592  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:37.611611  324649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-121129' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-121129/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-121129' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:21:37.744646  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:21:37.744678  324649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:21:37.744702  324649 ubuntu.go:190] setting up certificates
	I1010 18:21:37.744714  324649 provision.go:84] configureAuth start
	I1010 18:21:37.744775  324649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:21:37.762585  324649 provision.go:143] copyHostCerts
	I1010 18:21:37.762636  324649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:21:37.762644  324649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:21:37.762711  324649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:21:37.762804  324649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:21:37.762812  324649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:21:37.762837  324649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:21:37.762889  324649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:21:37.762896  324649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:21:37.762918  324649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:21:37.762968  324649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.newest-cni-121129 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-121129]
	I1010 18:21:38.017732  324649 provision.go:177] copyRemoteCerts
	I1010 18:21:38.017792  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:21:38.017828  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.035754  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.135582  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:21:38.158372  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 18:21:38.177887  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:21:38.197335  324649 provision.go:87] duration metric: took 452.609625ms to configureAuth
	I1010 18:21:38.197361  324649 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:21:38.197520  324649 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:38.197616  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.215693  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:38.215929  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:38.215945  324649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:21:38.487590  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:21:38.487615  324649 machine.go:96] duration metric: took 1.213566349s to provisionDockerMachine
	I1010 18:21:38.487627  324649 client.go:171] duration metric: took 6.737054602s to LocalClient.Create
	I1010 18:21:38.487644  324649 start.go:167] duration metric: took 6.737116946s to libmachine.API.Create "newest-cni-121129"
	I1010 18:21:38.487653  324649 start.go:293] postStartSetup for "newest-cni-121129" (driver="docker")
	I1010 18:21:38.487667  324649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:21:38.487718  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:21:38.487755  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.505301  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.604755  324649 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:21:38.608251  324649 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:21:38.608275  324649 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:21:38.608284  324649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:21:38.608338  324649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:21:38.608407  324649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:21:38.608505  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:21:38.617071  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:38.639238  324649 start.go:296] duration metric: took 151.569017ms for postStartSetup
	I1010 18:21:38.639632  324649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:21:38.658650  324649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/config.json ...
	I1010 18:21:38.658910  324649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:21:38.658972  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.676393  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.770086  324649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:21:38.774771  324649 start.go:128] duration metric: took 7.026418609s to createHost
	I1010 18:21:38.774799  324649 start.go:83] releasing machines lock for "newest-cni-121129", held for 7.026572954s
	I1010 18:21:38.774867  324649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:21:38.794249  324649 ssh_runner.go:195] Run: cat /version.json
	I1010 18:21:38.794292  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.794343  324649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:21:38.794395  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.812781  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.813044  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.964620  324649 ssh_runner.go:195] Run: systemctl --version
	I1010 18:21:38.971493  324649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:21:39.008047  324649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:21:39.012702  324649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:21:39.012768  324649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:21:39.043167  324649 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
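The find invocation above disarms any pre-existing bridge CNI by renaming its config files, so the kindnet plugin recommended later in this log is the only CNI that CRI-O loads. Per the "disabled" list just printed, the effect is equivalent to:

	sudo mv /etc/cni/net.d/87-podman-bridge.conflist \
	        /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled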
	I1010 18:21:39.043195  324649 start.go:495] detecting cgroup driver to use...
	I1010 18:21:39.043236  324649 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:21:39.043275  324649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:21:39.060424  324649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:21:39.073422  324649 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:21:39.073477  324649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:21:39.090113  324649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:21:39.108184  324649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:21:39.193075  324649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:21:39.284238  324649 docker.go:234] disabling docker service ...
	I1010 18:21:39.284295  324649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:21:39.303174  324649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:21:39.316224  324649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:21:39.401593  324649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:21:39.486478  324649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:21:39.499671  324649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:21:39.515336  324649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:21:39.515393  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.526705  324649 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:21:39.526768  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.536968  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.546772  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.556927  324649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:21:39.566265  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.576240  324649 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.591514  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.601231  324649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:21:39.609546  324649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:21:39.617339  324649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:39.697520  324649 ssh_runner.go:195] Run: sudo systemctl restart crio
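Taken together, the sed edits above leave the drop-in /etc/crio/crio.conf.d/02-crio.conf with the following settings before the restart (a sketch covering only the keys touched here; the surrounding TOML sections and untouched defaults are omitted):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]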
	I1010 18:21:39.833447  324649 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:21:39.833510  324649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:21:39.837650  324649 start.go:563] Will wait 60s for crictl version
	I1010 18:21:39.837706  324649 ssh_runner.go:195] Run: which crictl
	I1010 18:21:39.841778  324649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:21:39.866403  324649 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:21:39.866489  324649 ssh_runner.go:195] Run: crio --version
	I1010 18:21:39.894594  324649 ssh_runner.go:195] Run: crio --version
	I1010 18:21:39.923363  324649 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:21:39.924491  324649 cli_runner.go:164] Run: docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:21:39.942921  324649 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1010 18:21:39.947042  324649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
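The bash one-liner above is minikube's atomic /etc/hosts update: strip any stale host.minikube.internal line, append the fresh mapping, and sudo cp the temp file back. Copying in place (rather than mv) matters inside a Docker container, where /etc/hosts is a bind mount and replacing the inode would break it. The same pattern with hypothetical NAME/IP placeholders (values here taken from this run):

	NAME=host.minikube.internal IP=192.168.85.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts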
	I1010 18:21:39.959308  324649 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1010 18:21:36.669200  325699 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-821769" ...
	I1010 18:21:36.669266  325699 cli_runner.go:164] Run: docker start default-k8s-diff-port-821769
	I1010 18:21:36.950209  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:36.973712  325699 kic.go:430] container "default-k8s-diff-port-821769" state is running.
	I1010 18:21:36.974205  325699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-821769
	I1010 18:21:36.999384  325699 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/config.json ...
	I1010 18:21:36.999678  325699 machine.go:93] provisionDockerMachine start ...
	I1010 18:21:36.999832  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:37.025140  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.025476  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:37.025494  325699 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:21:37.026335  325699 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37242->127.0.0.1:33128: read: connection reset by peer
	I1010 18:21:40.162873  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-821769
	
	I1010 18:21:40.162901  325699 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-821769"
	I1010 18:21:40.162999  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.189150  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:40.189443  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:40.189466  325699 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-821769 && echo "default-k8s-diff-port-821769" | sudo tee /etc/hostname
	I1010 18:21:40.331478  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-821769
	
	I1010 18:21:40.331570  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.349460  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:40.349752  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:40.349789  325699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-821769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-821769/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-821769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:21:40.495960  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:21:40.495988  325699 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:21:40.496005  325699 ubuntu.go:190] setting up certificates
	I1010 18:21:40.496013  325699 provision.go:84] configureAuth start
	I1010 18:21:40.496106  325699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-821769
	I1010 18:21:40.515849  325699 provision.go:143] copyHostCerts
	I1010 18:21:40.515918  325699 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:21:40.515937  325699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:21:40.516030  325699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:21:40.516170  325699 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:21:40.516190  325699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:21:40.516240  325699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:21:40.516317  325699 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:21:40.516328  325699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:21:40.516365  325699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:21:40.516437  325699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-821769 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-821769 localhost minikube]
	I1010 18:21:40.621000  325699 provision.go:177] copyRemoteCerts
	I1010 18:21:40.621136  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:21:40.621199  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.639539  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:40.738484  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:21:40.758076  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1010 18:21:40.777450  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:21:40.796411  325699 provision.go:87] duration metric: took 300.38696ms to configureAuth
	I1010 18:21:40.796439  325699 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:21:40.796606  325699 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:40.796693  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.814633  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:40.814851  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:40.814874  325699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:21:41.126788  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:21:41.126818  325699 machine.go:96] duration metric: took 4.127117296s to provisionDockerMachine
	I1010 18:21:41.126831  325699 start.go:293] postStartSetup for "default-k8s-diff-port-821769" (driver="docker")
	I1010 18:21:41.126845  325699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:21:41.126909  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:21:41.126956  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.146094  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:41.244401  325699 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:21:41.247953  325699 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:21:41.247984  325699 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:21:41.247996  325699 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:21:41.248060  325699 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:21:41.248175  325699 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:21:41.248266  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:21:41.256669  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:41.275845  325699 start.go:296] duration metric: took 149.001179ms for postStartSetup
	I1010 18:21:41.275913  325699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:21:41.275950  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.294158  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:41.387292  325699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:21:41.391952  325699 fix.go:56] duration metric: took 4.745025215s for fixHost
	I1010 18:21:41.391980  325699 start.go:83] releasing machines lock for "default-k8s-diff-port-821769", held for 4.745085816s
	I1010 18:21:41.392032  325699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-821769
	I1010 18:21:41.410356  325699 ssh_runner.go:195] Run: cat /version.json
	I1010 18:21:41.410400  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.410462  325699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:21:41.410537  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.428673  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:41.429174  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:39.960290  324649 kubeadm.go:883] updating cluster {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:21:39.960390  324649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:39.960442  324649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:39.991643  324649 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:39.991664  324649 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:21:39.991716  324649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:40.018213  324649 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:40.018233  324649 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:21:40.018240  324649 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1010 18:21:40.018331  324649 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-121129 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:21:40.018427  324649 ssh_runner.go:195] Run: crio config
	I1010 18:21:40.065330  324649 cni.go:84] Creating CNI manager for ""
	I1010 18:21:40.065358  324649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:40.065375  324649 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1010 18:21:40.065395  324649 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-121129 NodeName:newest-cni-121129 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:21:40.065508  324649 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-121129"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:21:40.065561  324649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:21:40.074911  324649 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:21:40.074973  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:21:40.083566  324649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1010 18:21:40.097986  324649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:21:40.114282  324649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
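The kubeadm config block above is what lands in /var/tmp/minikube/kubeadm.yaml.new via the scp just shown. Outside the test harness, a generated config like this can be sanity-checked with kubeadm's built-in validator (the binary path is the one from this log; `kubeadm config validate` is a standard subcommand in this kubeadm generation):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	     --config /var/tmp/minikube/kubeadm.yaml.new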
	I1010 18:21:40.128847  324649 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:21:40.132698  324649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:21:40.143413  324649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:40.227094  324649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:40.249628  324649 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129 for IP: 192.168.85.2
	I1010 18:21:40.249652  324649 certs.go:195] generating shared ca certs ...
	I1010 18:21:40.249678  324649 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:40.249833  324649 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:21:40.249870  324649 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:21:40.249880  324649 certs.go:257] generating profile certs ...
	I1010 18:21:40.249964  324649 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key
	I1010 18:21:40.249986  324649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.crt with IP's: []
	I1010 18:21:40.601463  324649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.crt ...
	I1010 18:21:40.601490  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.crt: {Name:mk644ed6d675dd6a538c02d2c8e614b2a15b3122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:40.601663  324649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key ...
	I1010 18:21:40.601672  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key: {Name:mk914b6f6ffa18eaa800e7d301f088828f088f03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:40.601751  324649 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7
	I1010 18:21:40.601767  324649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1010 18:21:41.352224  324649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7 ...
	I1010 18:21:41.352248  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7: {Name:mkdef5060ad4b077648f6c85a78fa3bbbb5e73d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.352404  324649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7 ...
	I1010 18:21:41.352424  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7: {Name:mkfea0f84cddcdc4e3c69624946502bcf937c477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.352501  324649 certs.go:382] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7 -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt
	I1010 18:21:41.352570  324649 certs.go:386] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7 -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key
	I1010 18:21:41.352640  324649 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key
	I1010 18:21:41.352657  324649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt with IP's: []
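The SAN list requested above (10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2) can be confirmed on the written certificate with stock openssl, using the profile path from this log:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'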
	I1010 18:21:41.590793  325699 ssh_runner.go:195] Run: systemctl --version
	I1010 18:21:41.597352  325699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:21:41.632391  325699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:21:41.637267  325699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:21:41.637329  325699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:21:41.646619  325699 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1010 18:21:41.646643  325699 start.go:495] detecting cgroup driver to use...
	I1010 18:21:41.646672  325699 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:21:41.646707  325699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:21:41.662702  325699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:21:41.675945  325699 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:21:41.675998  325699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:21:41.690577  325699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:21:41.703139  325699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:21:41.785080  325699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:21:41.887442  325699 docker.go:234] disabling docker service ...
	I1010 18:21:41.887510  325699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:21:41.902511  325699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:21:41.915792  325699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:21:41.998153  325699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:21:42.082320  325699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:21:42.095388  325699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:21:42.110606  325699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:21:42.110668  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.120566  325699 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:21:42.120611  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.130445  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.140220  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.149997  325699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:21:42.159172  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.168739  325699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.177930  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.187922  325699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:21:42.196256  325699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:21:42.204604  325699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:42.288532  325699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:21:42.425073  325699 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:21:42.425143  325699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:21:42.429651  325699 start.go:563] Will wait 60s for crictl version
	I1010 18:21:42.429707  325699 ssh_runner.go:195] Run: which crictl
	I1010 18:21:42.433310  325699 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:21:42.459422  325699 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:21:42.459511  325699 ssh_runner.go:195] Run: crio --version
	I1010 18:21:42.491064  325699 ssh_runner.go:195] Run: crio --version
	I1010 18:21:42.523177  325699 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:21:42.524273  325699 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-821769 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:21:42.544600  325699 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1010 18:21:42.549336  325699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:21:42.561250  325699 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:21:42.561363  325699 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:42.561407  325699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:42.595069  325699 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:42.595092  325699 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:21:42.595137  325699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:42.621683  325699 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:42.621708  325699 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:21:42.621718  325699 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1010 18:21:42.621877  325699 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-821769 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:21:42.621955  325699 ssh_runner.go:195] Run: crio config
	I1010 18:21:42.670696  325699 cni.go:84] Creating CNI manager for ""
	I1010 18:21:42.670714  325699 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:42.670729  325699 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 18:21:42.670749  325699 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-821769 NodeName:default-k8s-diff-port-821769 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:21:42.670867  325699 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-821769"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:21:42.670920  325699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:21:42.679913  325699 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:21:42.679968  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:21:42.688618  325699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1010 18:21:42.703331  325699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:21:42.718311  325699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
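The kubeadm.yaml written here is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to sanity-check such a file before handing it to kubeadm is to walk the documents and print each apiVersion/kind pair; a minimal sketch assuming the gopkg.in/yaml.v3 module and the file path from the log above:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the scp step above.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}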
	I1010 18:21:42.732968  325699 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:21:42.736868  325699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
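The bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the current IP. The same idempotent update could be sketched in Go like this (host and IP are the values from this run; the real runner stages the change through a temp file and sudo cp, which the sketch skips):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.103.2" // value from the run above
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale entry, like grep -v $'\t...' above
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}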
	I1010 18:21:42.747553  325699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:42.829086  325699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:42.858574  325699 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769 for IP: 192.168.103.2
	I1010 18:21:42.858598  325699 certs.go:195] generating shared ca certs ...
	I1010 18:21:42.858623  325699 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:42.858780  325699 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:21:42.858834  325699 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:21:42.858849  325699 certs.go:257] generating profile certs ...
	I1010 18:21:42.858967  325699 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/client.key
	I1010 18:21:42.859085  325699 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/apiserver.key.10168654
	I1010 18:21:42.859140  325699 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/proxy-client.key
	I1010 18:21:42.859285  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:21:42.859321  325699 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:21:42.859336  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:21:42.859370  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:21:42.859399  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:21:42.859429  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:21:42.859481  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:42.860204  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:21:42.882094  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:21:42.903468  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:21:42.925737  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:21:42.953372  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 18:21:42.973504  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:21:42.992899  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:21:43.011728  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:21:43.030624  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:21:43.049802  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:21:43.070120  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:21:43.090039  325699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:21:43.103785  325699 ssh_runner.go:195] Run: openssl version
	I1010 18:21:43.110111  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:21:43.118950  325699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:21:43.122454  325699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:21:43.122512  325699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:21:43.157901  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:21:43.167111  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:21:43.176248  325699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:43.179836  325699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:43.179900  325699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:43.216894  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:21:43.226252  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:21:43.235390  325699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:21:43.239321  325699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:21:43.239380  325699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:21:43.273487  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
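Each CA certificate above gets an OpenSSL subject-hash symlink (<hash>.0, e.g. b5213941.0 for minikubeCA) under /etc/ssl/certs so TLS clients can locate it. A sketch of the same dance, shelling out to openssl exactly as the runner does (cert path from the log; writing to /etc/ssl/certs needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, matching the link above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Idempotent, like `ln -fs`: replace any existing link.
	os.Remove(link)
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link)
}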
	I1010 18:21:43.282570  325699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:21:43.286433  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:21:43.320357  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:21:43.361223  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:21:43.409478  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:21:43.456529  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:21:43.512033  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
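`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 24 hours; that is how the restart path decides whether control-plane certs need regenerating. The pure-Go equivalent of one such check, using only crypto/x509 (cert path taken from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of -checkend 86400: will the cert still be valid in 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regenerate")
	} else {
		fmt.Println("certificate is good for at least another day")
	}
}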
	I1010 18:21:43.568244  325699 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:43.568348  325699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:21:43.568440  325699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:21:43.611528  325699 cri.go:89] found id: "1352ca41b0e7626fbf6ee43638506dfab18bd157572e9128f411ac1c5ae54538"
	I1010 18:21:43.611555  325699 cri.go:89] found id: "2aeadcb9e03cc805af5eff4f1b521299f31e4d618387d10eef543b4e95787f70"
	I1010 18:21:43.611560  325699 cri.go:89] found id: "6c6e229b2a8311cf4d60aad6c602e02c2923b5ba2309e536076e40579456e8e2"
	I1010 18:21:43.611565  325699 cri.go:89] found id: "c3f03c923ad6830325d9888fdf2ad9de25ac73298e25b5812f72951d65af2eec"
	I1010 18:21:43.611569  325699 cri.go:89] found id: ""
	I1010 18:21:43.611612  325699 ssh_runner.go:195] Run: sudo runc list -f json
	W1010 18:21:43.627173  325699 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:43Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:21:43.627256  325699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:21:43.638581  325699 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1010 18:21:43.638602  325699 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1010 18:21:43.638652  325699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 18:21:43.650423  325699 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:21:43.651568  325699 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-821769" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:43.652341  325699 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-5815/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-821769" cluster setting kubeconfig missing "default-k8s-diff-port-821769" context setting]
	I1010 18:21:43.653567  325699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:43.655682  325699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 18:21:43.667709  325699 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1010 18:21:43.667743  325699 kubeadm.go:601] duration metric: took 29.134937ms to restartPrimaryControlPlane
	I1010 18:21:43.667753  325699 kubeadm.go:402] duration metric: took 99.518506ms to StartCluster
	I1010 18:21:43.667770  325699 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:43.667845  325699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:43.669889  325699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:43.670281  325699 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:21:43.670407  325699 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:21:43.670513  325699 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-821769"
	I1010 18:21:43.670534  325699 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-821769"
	W1010 18:21:43.670546  325699 addons.go:247] addon storage-provisioner should already be in state true
	I1010 18:21:43.670545  325699 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-821769"
	I1010 18:21:43.670572  325699 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-821769"
	I1010 18:21:43.670580  325699 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-821769"
	W1010 18:21:43.670582  325699 addons.go:247] addon dashboard should already be in state true
	I1010 18:21:43.670595  325699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-821769"
	I1010 18:21:43.670677  325699 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:21:43.670572  325699 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:21:43.670904  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.671151  325699 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:43.671356  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.672130  325699 out.go:179] * Verifying Kubernetes components...
	I1010 18:21:43.672709  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.673037  325699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:43.701170  325699 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:21:43.703152  325699 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:21:43.703189  325699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:21:43.703293  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:43.709767  325699 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-821769"
	W1010 18:21:43.709840  325699 addons.go:247] addon default-storageclass should already be in state true
	I1010 18:21:43.709890  325699 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:21:43.710622  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.711556  325699 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1010 18:21:43.715168  325699 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1010 18:21:43.716093  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1010 18:21:43.716116  325699 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1010 18:21:43.716174  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:43.745595  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:43.754680  325699 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:21:43.754766  325699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:21:43.754853  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:43.766642  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:43.784887  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:43.856990  325699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:43.873309  325699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-821769" to be "Ready" ...
	I1010 18:21:43.936166  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1010 18:21:43.936223  325699 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1010 18:21:43.955509  325699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:21:43.956951  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1010 18:21:43.956971  325699 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1010 18:21:43.985048  325699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:21:43.985772  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1010 18:21:43.986042  325699 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1010 18:21:44.008589  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1010 18:21:44.008614  325699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1010 18:21:44.034035  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1010 18:21:44.034165  325699 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1010 18:21:44.061163  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1010 18:21:44.061253  325699 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1010 18:21:44.112492  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1010 18:21:44.112518  325699 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1010 18:21:44.149803  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1010 18:21:44.149896  325699 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1010 18:21:44.172145  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:21:44.172172  325699 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1010 18:21:44.191656  325699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:21:45.474823  325699 node_ready.go:49] node "default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:45.474857  325699 node_ready.go:38] duration metric: took 1.601510652s for node "default-k8s-diff-port-821769" to be "Ready" ...
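The node_ready wait is a poll on the node object until its Ready condition reports True. A compact client-go version of that loop, as a sketch only (kubeconfig path, node name, and the 6m deadline come from the log; the 2-second poll interval is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21724-5815/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait above
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-821769", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node Ready")
}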
	I1010 18:21:45.474873  325699 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:21:45.474923  325699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:21:45.570164  325699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.614616389s)
	I1010 18:21:46.101989  325699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.116012627s)
	I1010 18:21:46.102157  325699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.910456027s)
	I1010 18:21:46.102189  325699 api_server.go:72] duration metric: took 2.431862039s to wait for apiserver process to appear ...
	I1010 18:21:46.102205  325699 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:21:46.102226  325699 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1010 18:21:46.103626  325699 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-821769 addons enable metrics-server
	
	I1010 18:21:46.104750  325699 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1010 18:21:46.105672  325699 addons.go:514] duration metric: took 2.435260331s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1010 18:21:46.106650  325699 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:21:46.106667  325699 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
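A 500 from /healthz during a restart is expected: the two [-] poststarthooks above (rbac bootstrap roles, system priority classes) simply have not finished yet. The wait loop just keeps hitting the endpoint until it returns 200. A self-contained sketch of such a poll (endpoint from the log; the apiserver cert is signed by minikubeCA, so the sketch skips verification):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Self-signed CA; skip verification for the sketch only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 60; i++ {
		resp, err := client.Get("https://192.168.103.2:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok:", string(body))
				return
			}
			fmt.Printf("healthz not ready yet (%d)\n", resp.StatusCode)
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for healthz")
}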
	I1010 18:21:41.799013  324649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt ...
	I1010 18:21:41.799039  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt: {Name:mk0669ceb9e9a4f760f7827d6d6abc6856417c2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.799218  324649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key ...
	I1010 18:21:41.799235  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key: {Name:mk52379a2bae9262f9822bb1871c3d07af332ca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.799416  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:21:41.799450  324649 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:21:41.799460  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:21:41.799483  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:21:41.799510  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:21:41.799531  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:21:41.799566  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:41.800117  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:21:41.825626  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:21:41.848227  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:21:41.869328  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:21:41.890966  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:21:41.911349  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:21:41.931728  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:21:41.958462  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:21:41.979538  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:21:42.001499  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:21:42.024216  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:21:42.046423  324649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:21:42.061125  324649 ssh_runner.go:195] Run: openssl version
	I1010 18:21:42.067212  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:21:42.076458  324649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:42.080715  324649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:42.080767  324649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:42.116344  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:21:42.126106  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:21:42.135627  324649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:21:42.139483  324649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:21:42.139535  324649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:21:42.177476  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:21:42.187546  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:21:42.196815  324649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:21:42.200662  324649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:21:42.200712  324649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:21:42.244485  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:21:42.254296  324649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:21:42.258045  324649 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:21:42.258109  324649 kubeadm.go:400] StartCluster: {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:42.258208  324649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:21:42.258261  324649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:21:42.287523  324649 cri.go:89] found id: ""
	I1010 18:21:42.287614  324649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:21:42.296824  324649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 18:21:42.306175  324649 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1010 18:21:42.306236  324649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 18:21:42.314702  324649 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 18:21:42.314726  324649 kubeadm.go:157] found existing configuration files:
	
	I1010 18:21:42.314769  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 18:21:42.323157  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 18:21:42.323223  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 18:21:42.331276  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 18:21:42.340535  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 18:21:42.340585  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 18:21:42.349218  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 18:21:42.357935  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 18:21:42.357997  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 18:21:42.366285  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 18:21:42.374774  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 18:21:42.374815  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
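The grep/rm sequence above is stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed so kubeadm init can regenerate it. The same check as a Go sketch (file list and endpoint taken from the log; removing these files requires root):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: remove so kubeadm regenerates it.
			os.Remove(f)
			fmt.Println("removed stale config:", f)
			continue
		}
		fmt.Println("kept:", f)
	}
}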
	I1010 18:21:42.383131  324649 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1010 18:21:42.446332  324649 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1010 18:21:42.516841  324649 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	
	==> CRI-O <==
	Oct 10 18:21:06 no-preload-556024 crio[559]: time="2025-10-10T18:21:06.367152747Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 10 18:21:06 no-preload-556024 crio[559]: time="2025-10-10T18:21:06.373563306Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 10 18:21:06 no-preload-556024 crio[559]: time="2025-10-10T18:21:06.373596731Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.466237791Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8cf0cbb2-f128-4d97-9fc0-1ed24853303f name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.468637708Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9bcdd185-8d4f-4a45-b8ea-c8c7a13f6651 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.471655034Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5/dashboard-metrics-scraper" id=708874af-ce07-47d7-a80b-ede21344c52a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.473376693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.479306768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.479720775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.504019149Z" level=info msg="Created container 2a18fa8993b6454a243ebedac42429e502364ba0ed77ebf8041dcadcd9e5da7a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5/dashboard-metrics-scraper" id=708874af-ce07-47d7-a80b-ede21344c52a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.504592924Z" level=info msg="Starting container: 2a18fa8993b6454a243ebedac42429e502364ba0ed77ebf8041dcadcd9e5da7a" id=b97c642b-38c8-4c85-8be5-aaf11156313b name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.506323185Z" level=info msg="Started container" PID=1729 containerID=2a18fa8993b6454a243ebedac42429e502364ba0ed77ebf8041dcadcd9e5da7a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5/dashboard-metrics-scraper id=b97c642b-38c8-4c85-8be5-aaf11156313b name=/runtime.v1.RuntimeService/StartContainer sandboxID=917e06fdf420b0e993687a4384a79ceb85dbd499fb2362a355ed46f5bd86a3ce
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.5945516Z" level=info msg="Removing container: c208dd7c502b680f922ae34d5a7fabb1f0db3bb1cfdd0d5f8b721f4e24e5fb89" id=db7503a5-7081-4611-9db9-8221053c9f51 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:21:19 no-preload-556024 crio[559]: time="2025-10-10T18:21:19.604121915Z" level=info msg="Removed container c208dd7c502b680f922ae34d5a7fabb1f0db3bb1cfdd0d5f8b721f4e24e5fb89: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5/dashboard-metrics-scraper" id=db7503a5-7081-4611-9db9-8221053c9f51 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.613225812Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=433c6c3f-6a9d-4ff9-b420-344491cdc65a name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.614244343Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fa78571d-ebd3-4703-aa74-f3d3b76abd97 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.615287147Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=cb0e6dff-a33a-4347-9347-7d42fde3c1f7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.615550112Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.623433825Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.623641913Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e61a0504cf2b2fcf2fa6da1c51527e6d8a90fc4235bcabfbeb8a5316b55d2edf/merged/etc/passwd: no such file or directory"
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.623676486Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e61a0504cf2b2fcf2fa6da1c51527e6d8a90fc4235bcabfbeb8a5316b55d2edf/merged/etc/group: no such file or directory"
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.624007005Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.654904307Z" level=info msg="Created container 881de1435156942e8e9fe01d8027baa3c3ac0ed457aa689f7625ac0b503df981: kube-system/storage-provisioner/storage-provisioner" id=cb0e6dff-a33a-4347-9347-7d42fde3c1f7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.655548223Z" level=info msg="Starting container: 881de1435156942e8e9fe01d8027baa3c3ac0ed457aa689f7625ac0b503df981" id=cd3138c7-2bd7-4895-aaa0-9498c5b50f67 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:21:26 no-preload-556024 crio[559]: time="2025-10-10T18:21:26.657437355Z" level=info msg="Started container" PID=1743 containerID=881de1435156942e8e9fe01d8027baa3c3ac0ed457aa689f7625ac0b503df981 description=kube-system/storage-provisioner/storage-provisioner id=cd3138c7-2bd7-4895-aaa0-9498c5b50f67 name=/runtime.v1.RuntimeService/StartContainer sandboxID=72b88b3eed5c381b2168fb59d5d4149e5cf6a1e56dafac47bc05cd8c7a335646
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	881de14351569       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   72b88b3eed5c3       storage-provisioner                          kube-system
	2a18fa8993b64       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago      Exited              dashboard-metrics-scraper   2                   917e06fdf420b       dashboard-metrics-scraper-6ffb444bf9-trwt5   kubernetes-dashboard
	e0dd2d726bc06       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   a39d5cf3ab316       kubernetes-dashboard-855c9754f9-75n29        kubernetes-dashboard
	80f9feb04d7e5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   16838ce62a106       busybox                                      default
	1e3ad2e9d70e5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   60480a45efd67       coredns-66bc5c9577-wpsrd                     kube-system
	ded19ae952b01       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   afea00e8e001e       kube-proxy-frchp                             kube-system
	7da7c710c0c97       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   72b88b3eed5c3       storage-provisioner                          kube-system
	58578d5735e6c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   05899d20e642f       kindnet-wsk6h                                kube-system
	624948aa983f6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   1422d33c2807f       kube-apiserver-no-preload-556024             kube-system
	63abfddfe6fe2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   d12640afcfe0f       etcd-no-preload-556024                       kube-system
	579953ecaa5c7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   140a95566941e       kube-controller-manager-no-preload-556024    kube-system
	f690c75f2865b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   b9a286367331a       kube-scheduler-no-preload-556024             kube-system
	
	
	==> coredns [1e3ad2e9d70e55e1c0f0706b095edba6bc813cc89953f666ce9c438a535fb038] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48022 - 36274 "HINFO IN 2880962916233392715.4244643584359087425. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02393549s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
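	Note: every coredns list call above fails the same way: the dial to tcp 10.96.0.1:443 (the kubernetes service ClusterIP) times out, so the plugin can never sync its caches. Below is a minimal Go sketch of that reachability check; it assumes it runs inside the cluster network (e.g. from a debug pod), and the VIP and port are taken from the log above.
	
		// vipprobe.go - hedged sketch, not part of the test suite: reproduce the
		// TCP dial that coredns (and storage-provisioner, further down) times out on.
		package main
	
		import (
			"fmt"
			"net"
			"time"
		)
	
		func main() {
			// 10.96.0.1:443 is the default kubernetes service VIP seen in the errors above.
			conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
			if err != nil {
				fmt.Println("VIP unreachable:", err) // an "i/o timeout" here matches the coredns log
				return
			}
			conn.Close()
			fmt.Println("VIP reachable")
		}
	
	If the dial fails from inside a pod while the apiserver answers on the node IP, the usual suspect is the service-proxy ruleset (kube-proxy/kindnet) rather than the apiserver itself.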
	
	==> describe nodes <==
	Name:               no-preload-556024
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-556024
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=no-preload-556024
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_19_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:19:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-556024
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:21:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:21:24 +0000   Fri, 10 Oct 2025 18:19:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:21:24 +0000   Fri, 10 Oct 2025 18:19:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:21:24 +0000   Fri, 10 Oct 2025 18:19:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 18:21:24 +0000   Fri, 10 Oct 2025 18:20:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-556024
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                5de188e9-37d1-4335-8d19-aac53380f91c
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-wpsrd                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-no-preload-556024                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-wsk6h                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-556024              250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-no-preload-556024     200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-frchp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-556024              100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-trwt5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-75n29         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 112s                 kube-proxy       
	  Normal  Starting                 53s                  kube-proxy       
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node no-preload-556024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node no-preload-556024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x8 over 2m4s)  kubelet          Node no-preload-556024 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    119s                 kubelet          Node no-preload-556024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  119s                 kubelet          Node no-preload-556024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     119s                 kubelet          Node no-preload-556024 status is now: NodeHasSufficientPID
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           115s                 node-controller  Node no-preload-556024 event: Registered Node no-preload-556024 in Controller
	  Normal  NodeReady                99s                  kubelet          Node no-preload-556024 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node no-preload-556024 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node no-preload-556024 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node no-preload-556024 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                  node-controller  Node no-preload-556024 event: Registered Node no-preload-556024 in Controller
	
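	The "Allocated resources" percentages are just summed pod requests over node capacity; a small illustrative sketch of the arithmetic, with values copied from the pod table above (8 CPUs = 8000m):
	
		// allocpct.go - illustrative only: reproduce the "cpu 850m (10%)" line.
		package main
	
		import "fmt"
	
		func main() {
			// CPU requests in millicores: coredns, etcd, kindnet, apiserver,
			// controller-manager, scheduler (all other pods request 0).
			requests := []int{100, 100, 100, 250, 200, 100}
			total := 0
			for _, r := range requests {
				total += r
			}
			capacity := 8000 // 8 CPUs
			fmt.Printf("cpu %dm (%d%%)\n", total, total*100/capacity) // cpu 850m (10%)
		}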
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 95 0c 3e 92 2e 08 06
	[  +0.052845] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[ +11.354316] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[  +7.101927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 9b 73 27 8c 80 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[  +6.287191] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 27 2d 28 d6 46 08 06
	[  +0.000293] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[Oct10 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 8c 22 f6 6b cf 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 29 bf 13 20 f9 08 06
	[ +15.511156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e d6 74 aa 27 d0 08 06
	[  +0.008495] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	[Oct10 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 0b 54 33 52 4e 08 06
	[  +0.000597] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	
	
	==> etcd [63abfddfe6fe2887c4901b8e265aae05ec3330bd42bd0d67e011b354a39c6023] <==
	{"level":"warn","ts":"2025-10-10T18:20:53.620868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.628388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.642425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.651342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.659763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.667142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.675764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.683461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.692014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.719046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.726185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.735085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.743707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.754963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.762590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.771679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.779956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.787668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.799676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.804544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.812421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.827004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.835496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:53.843653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52388","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-10T18:21:35.889799Z","caller":"traceutil/trace.go:172","msg":"trace[2141740107] transaction","detail":"{read_only:false; response_revision:669; number_of_response:1; }","duration":"129.464837ms","start":"2025-10-10T18:21:35.760313Z","end":"2025-10-10T18:21:35.889778Z","steps":["trace[2141740107] 'process raft request'  (duration: 127.542372ms)"],"step_count":1}
	
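	The long run of "rejected connection on client endpoint ... EOF" warnings is the signature of plain-TCP health probes against etcd's TLS client port: the prober connects and closes without starting a handshake, so the server reads EOF. A hedged sketch of such a probe (the address is etcd's default client port, an assumption here, not taken from this log):
	
		// etcdprobe.go - sketch of a connect-and-close liveness check that would
		// produce the EOF warnings above; it performs no TLS handshake on purpose.
		package main
	
		import (
			"log"
			"net"
		)
	
		func main() {
			conn, err := net.Dial("tcp", "127.0.0.1:2379")
			if err != nil {
				log.Fatal(err)
			}
			conn.Close() // closing before any TLS bytes is what etcd logs as "EOF"
		}
	
	These warnings are noisy but normally benign.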
	
	==> kernel <==
	 18:21:49 up  1:04,  0 user,  load average: 5.81, 4.72, 2.99
	Linux no-preload-556024 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [58578d5735e6c09f8bee7a1bed1c2a6815baa58dec329a977a887f8e583cf301] <==
	I1010 18:20:56.052455       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:20:56.052723       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1010 18:20:56.052858       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:20:56.052873       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:20:56.052890       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:20:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:20:56.351579       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:20:56.351616       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:20:56.351628       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:20:56.352765       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 18:20:56.751746       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:20:56.751997       1 metrics.go:72] Registering metrics
	I1010 18:20:56.752176       1 controller.go:711] "Syncing nftables rules"
	I1010 18:21:06.351255       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1010 18:21:06.351335       1 main.go:301] handling current node
	I1010 18:21:16.353564       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1010 18:21:16.353604       1 main.go:301] handling current node
	I1010 18:21:26.352123       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1010 18:21:26.352159       1 main.go:301] handling current node
	I1010 18:21:36.356142       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1010 18:21:36.356223       1 main.go:301] handling current node
	I1010 18:21:46.360158       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1010 18:21:46.360202       1 main.go:301] handling current node
	
	
	==> kube-apiserver [624948aa983f6a950a5a86e99ebbf4e3cec99b2849460ed697524b3fc4ffac05] <==
	I1010 18:20:54.529463       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1010 18:20:54.529543       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1010 18:20:54.530452       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1010 18:20:54.530564       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1010 18:20:54.530622       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1010 18:20:54.530579       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1010 18:20:54.531818       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1010 18:20:54.531884       1 aggregator.go:171] initial CRD sync complete...
	I1010 18:20:54.531917       1 autoregister_controller.go:144] Starting autoregister controller
	I1010 18:20:54.531924       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1010 18:20:54.531931       1 cache.go:39] Caches are synced for autoregister controller
	I1010 18:20:54.530591       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1010 18:20:54.536297       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1010 18:20:54.582822       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:20:54.906855       1 controller.go:667] quota admission added evaluator for: namespaces
	I1010 18:20:54.953780       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1010 18:20:54.992716       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:20:55.001496       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:20:55.008893       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1010 18:20:55.052037       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.94.107"}
	I1010 18:20:55.078220       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.23.206"}
	I1010 18:20:55.434262       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:20:57.890245       1 controller.go:667] quota admission added evaluator for: endpoints
	I1010 18:20:58.246414       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1010 18:20:58.439689       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [579953ecaa5c709ae190ac505c57c31de755d4d689b3be28199b4f18c038f574] <==
	I1010 18:20:57.888242       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1010 18:20:57.888527       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1010 18:20:57.888717       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1010 18:20:57.891575       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1010 18:20:57.892460       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1010 18:20:57.894544       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1010 18:20:57.894620       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:20:57.899104       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1010 18:20:57.901377       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1010 18:20:57.901506       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1010 18:20:57.910776       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 18:20:57.913923       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1010 18:20:57.914036       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1010 18:20:57.914208       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-556024"
	I1010 18:20:57.914275       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1010 18:20:57.917896       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1010 18:20:57.921271       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1010 18:20:57.924512       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1010 18:20:57.926779       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1010 18:20:57.929088       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1010 18:20:57.931399       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1010 18:20:57.937300       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1010 18:20:57.937369       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1010 18:20:57.937387       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1010 18:20:57.937458       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	
	
	==> kube-proxy [ded19ae952b01a25d91e7233536d7b2a7e1abc59c551437700353661b7888410] <==
	I1010 18:20:56.006481       1 server_linux.go:53] "Using iptables proxy"
	I1010 18:20:56.065092       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 18:20:56.166356       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 18:20:56.166412       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1010 18:20:56.166513       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:20:56.191836       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:20:56.191911       1 server_linux.go:132] "Using iptables Proxier"
	I1010 18:20:56.198341       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:20:56.198839       1 server.go:527] "Version info" version="v1.34.1"
	I1010 18:20:56.198915       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:20:56.206152       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 18:20:56.206176       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 18:20:56.206184       1 config.go:309] "Starting node config controller"
	I1010 18:20:56.206199       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 18:20:56.206207       1 config.go:200] "Starting service config controller"
	I1010 18:20:56.206213       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 18:20:56.206229       1 config.go:106] "Starting endpoint slice config controller"
	I1010 18:20:56.206234       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 18:20:56.306856       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 18:20:56.306983       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1010 18:20:56.306986       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 18:20:56.307029       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [f690c75f2865bf33ee267a92d360114ddc8d677ee96e0e894aa2e4d900fd9adf] <==
	I1010 18:20:51.679920       1 serving.go:386] Generated self-signed cert in-memory
	W1010 18:20:54.449453       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1010 18:20:54.449561       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1010 18:20:54.449596       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1010 18:20:54.449628       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1010 18:20:54.498339       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1010 18:20:54.499561       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:20:54.504850       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:20:54.504962       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:20:54.505467       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1010 18:20:54.507127       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1010 18:20:54.605690       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 10 18:20:58 no-preload-556024 kubelet[701]: I1010 18:20:58.495222     701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2209ed8b-b88a-45f4-a57a-36decaa54d79-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-75n29\" (UID: \"2209ed8b-b88a-45f4-a57a-36decaa54d79\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-75n29"
	Oct 10 18:20:58 no-preload-556024 kubelet[701]: I1010 18:20:58.495247     701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqt9x\" (UniqueName: \"kubernetes.io/projected/2209ed8b-b88a-45f4-a57a-36decaa54d79-kube-api-access-wqt9x\") pod \"kubernetes-dashboard-855c9754f9-75n29\" (UID: \"2209ed8b-b88a-45f4-a57a-36decaa54d79\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-75n29"
	Oct 10 18:21:00 no-preload-556024 kubelet[701]: I1010 18:21:00.614672     701 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 10 18:21:01 no-preload-556024 kubelet[701]: I1010 18:21:01.537613     701 scope.go:117] "RemoveContainer" containerID="8bd2d295cec21f02ffe4e195db323b302dc12842744232f9e87648aa06f4bce2"
	Oct 10 18:21:02 no-preload-556024 kubelet[701]: I1010 18:21:02.546683     701 scope.go:117] "RemoveContainer" containerID="c208dd7c502b680f922ae34d5a7fabb1f0db3bb1cfdd0d5f8b721f4e24e5fb89"
	Oct 10 18:21:02 no-preload-556024 kubelet[701]: I1010 18:21:02.547324     701 scope.go:117] "RemoveContainer" containerID="8bd2d295cec21f02ffe4e195db323b302dc12842744232f9e87648aa06f4bce2"
	Oct 10 18:21:02 no-preload-556024 kubelet[701]: E1010 18:21:02.548007     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-trwt5_kubernetes-dashboard(e4ce8751-5cd9-47b8-8093-bdcd167eabac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5" podUID="e4ce8751-5cd9-47b8-8093-bdcd167eabac"
	Oct 10 18:21:03 no-preload-556024 kubelet[701]: I1010 18:21:03.551001     701 scope.go:117] "RemoveContainer" containerID="c208dd7c502b680f922ae34d5a7fabb1f0db3bb1cfdd0d5f8b721f4e24e5fb89"
	Oct 10 18:21:03 no-preload-556024 kubelet[701]: E1010 18:21:03.551278     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-trwt5_kubernetes-dashboard(e4ce8751-5cd9-47b8-8093-bdcd167eabac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5" podUID="e4ce8751-5cd9-47b8-8093-bdcd167eabac"
	Oct 10 18:21:05 no-preload-556024 kubelet[701]: I1010 18:21:05.906309     701 scope.go:117] "RemoveContainer" containerID="c208dd7c502b680f922ae34d5a7fabb1f0db3bb1cfdd0d5f8b721f4e24e5fb89"
	Oct 10 18:21:05 no-preload-556024 kubelet[701]: E1010 18:21:05.906493     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-trwt5_kubernetes-dashboard(e4ce8751-5cd9-47b8-8093-bdcd167eabac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5" podUID="e4ce8751-5cd9-47b8-8093-bdcd167eabac"
	Oct 10 18:21:06 no-preload-556024 kubelet[701]: I1010 18:21:06.569804     701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-75n29" podStartSLOduration=1.367649614 podStartE2EDuration="8.569780413s" podCreationTimestamp="2025-10-10 18:20:58 +0000 UTC" firstStartedPulling="2025-10-10 18:20:58.691517188 +0000 UTC m=+8.350659870" lastFinishedPulling="2025-10-10 18:21:05.89364797 +0000 UTC m=+15.552790669" observedRunningTime="2025-10-10 18:21:06.569538451 +0000 UTC m=+16.228681152" watchObservedRunningTime="2025-10-10 18:21:06.569780413 +0000 UTC m=+16.228923114"
	Oct 10 18:21:19 no-preload-556024 kubelet[701]: I1010 18:21:19.465678     701 scope.go:117] "RemoveContainer" containerID="c208dd7c502b680f922ae34d5a7fabb1f0db3bb1cfdd0d5f8b721f4e24e5fb89"
	Oct 10 18:21:19 no-preload-556024 kubelet[701]: I1010 18:21:19.593281     701 scope.go:117] "RemoveContainer" containerID="c208dd7c502b680f922ae34d5a7fabb1f0db3bb1cfdd0d5f8b721f4e24e5fb89"
	Oct 10 18:21:19 no-preload-556024 kubelet[701]: I1010 18:21:19.593516     701 scope.go:117] "RemoveContainer" containerID="2a18fa8993b6454a243ebedac42429e502364ba0ed77ebf8041dcadcd9e5da7a"
	Oct 10 18:21:19 no-preload-556024 kubelet[701]: E1010 18:21:19.593727     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-trwt5_kubernetes-dashboard(e4ce8751-5cd9-47b8-8093-bdcd167eabac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5" podUID="e4ce8751-5cd9-47b8-8093-bdcd167eabac"
	Oct 10 18:21:25 no-preload-556024 kubelet[701]: I1010 18:21:25.907154     701 scope.go:117] "RemoveContainer" containerID="2a18fa8993b6454a243ebedac42429e502364ba0ed77ebf8041dcadcd9e5da7a"
	Oct 10 18:21:25 no-preload-556024 kubelet[701]: E1010 18:21:25.907338     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-trwt5_kubernetes-dashboard(e4ce8751-5cd9-47b8-8093-bdcd167eabac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5" podUID="e4ce8751-5cd9-47b8-8093-bdcd167eabac"
	Oct 10 18:21:26 no-preload-556024 kubelet[701]: I1010 18:21:26.612732     701 scope.go:117] "RemoveContainer" containerID="7da7c710c0c97e371e285306921a08629d485643a3d7a010a63878a9e851b4ff"
	Oct 10 18:21:36 no-preload-556024 kubelet[701]: I1010 18:21:36.465986     701 scope.go:117] "RemoveContainer" containerID="2a18fa8993b6454a243ebedac42429e502364ba0ed77ebf8041dcadcd9e5da7a"
	Oct 10 18:21:36 no-preload-556024 kubelet[701]: E1010 18:21:36.466256     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-trwt5_kubernetes-dashboard(e4ce8751-5cd9-47b8-8093-bdcd167eabac)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-trwt5" podUID="e4ce8751-5cd9-47b8-8093-bdcd167eabac"
	Oct 10 18:21:44 no-preload-556024 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 10 18:21:44 no-preload-556024 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 10 18:21:44 no-preload-556024 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 10 18:21:44 no-preload-556024 systemd[1]: kubelet.service: Consumed 1.702s CPU time.
	
	
	==> kubernetes-dashboard [e0dd2d726bc067123461686a973a1bca5f3036eb38199d551b8302751e01c850] <==
	2025/10/10 18:21:05 Starting overwatch
	2025/10/10 18:21:05 Using namespace: kubernetes-dashboard
	2025/10/10 18:21:05 Using in-cluster config to connect to apiserver
	2025/10/10 18:21:05 Using secret token for csrf signing
	2025/10/10 18:21:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/10 18:21:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/10 18:21:05 Successful initial request to the apiserver, version: v1.34.1
	2025/10/10 18:21:05 Generating JWE encryption key
	2025/10/10 18:21:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/10 18:21:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/10 18:21:06 Initializing JWE encryption key from synchronized object
	2025/10/10 18:21:06 Creating in-cluster Sidecar client
	2025/10/10 18:21:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/10 18:21:06 Serving insecurely on HTTP port: 9090
	2025/10/10 18:21:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7da7c710c0c97e371e285306921a08629d485643a3d7a010a63878a9e851b4ff] <==
	I1010 18:20:55.941717       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1010 18:21:25.946444       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [881de1435156942e8e9fe01d8027baa3c3ac0ed457aa689f7625ac0b503df981] <==
	I1010 18:21:26.673854       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 18:21:26.682163       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 18:21:26.682210       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1010 18:21:26.685013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:30.140562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:34.400926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:37.999245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:41.053925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:44.079252       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:44.094198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:21:44.094969       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 18:21:44.095253       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-556024_24cc3ef3-641a-48a7-a62b-899ab2362c20!
	I1010 18:21:44.095280       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"239ef5d2-e469-4829-842f-94522e30a190", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-556024_24cc3ef3-641a-48a7-a62b-899ab2362c20 became leader
	W1010 18:21:44.103130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:44.125014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:21:44.195545       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-556024_24cc3ef3-641a-48a7-a62b-899ab2362c20!
	W1010 18:21:46.128265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:46.131819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:48.136206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:48.140663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-556024 -n no-preload-556024
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-556024 -n no-preload-556024: exit status 2 (377.141634ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
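The --format flag takes a Go text/template rendered against minikube's status struct, which is how the command can print "Running" for the apiserver yet still exit 2 for the overall status check. A minimal sketch (the Status type here is hypothetical, for illustration only):

	// statusfmt.go - how a template like "{{.APIServer}}" picks one field out
	// of a status struct; field names are assumptions, not minikube's exact types.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		// Mirrors the post-mortem above: APIServer renders "Running" even
		// though another component (here the kubelet) is stopped.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"})
	}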
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-556024 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.28s)
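
Both Pause failures in this group follow the same pattern, visible in full in the embed-certs trace below: the pause path first disables the kubelet, then lists running CRI containers via crictl, then shells out to sudo runc list -f json, which exits 1 with "open /run/runc: no such file or directory"; minikube retries with growing delays and finally gives up with GUEST_PAUSE. A hedged sketch of that retry loop (delays copied from the log; the real backoff schedule lives in minikube's retry package):

	// runclist.go - illustrative reproduction of the failing step, not
	// minikube's actual code.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delays := []time.Duration{160 * time.Millisecond, 312 * time.Millisecond, 411 * time.Millisecond}
		for i, d := range delays {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
			if err == nil {
				fmt.Printf("attempt %d ok: %s\n", i+1, out)
				return
			}
			// On this node the state dir /run/runc does not exist (cri-o may
			// keep runc state elsewhere), so every attempt fails the same way.
			fmt.Printf("attempt %d: %v; retrying in %v\n", i+1, err, d)
			time.Sleep(d)
		}
		fmt.Println("giving up, as the pause path does with GUEST_PAUSE")
	}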

TestStartStop/group/embed-certs/serial/Pause (7.19s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-472518 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-472518 --alsologtostderr -v=1: exit status 80 (2.383898656s)

-- stdout --
	* Pausing node embed-certs-472518 ... 
	
	

-- /stdout --
** stderr ** 
	I1010 18:21:46.574023  329602 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:21:46.574299  329602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:46.574309  329602 out.go:374] Setting ErrFile to fd 2...
	I1010 18:21:46.574313  329602 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:46.574525  329602 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:21:46.574811  329602 out.go:368] Setting JSON to false
	I1010 18:21:46.574849  329602 mustload.go:65] Loading cluster: embed-certs-472518
	I1010 18:21:46.575327  329602 config.go:182] Loaded profile config "embed-certs-472518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:46.575768  329602 cli_runner.go:164] Run: docker container inspect embed-certs-472518 --format={{.State.Status}}
	I1010 18:21:46.599319  329602 host.go:66] Checking if "embed-certs-472518" exists ...
	I1010 18:21:46.599701  329602 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:21:46.671805  329602 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-10 18:21:46.657953986 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:21:46.672869  329602 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-472518 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1010 18:21:46.675516  329602 out.go:179] * Pausing node embed-certs-472518 ... 
	I1010 18:21:46.676644  329602 host.go:66] Checking if "embed-certs-472518" exists ...
	I1010 18:21:46.676882  329602 ssh_runner.go:195] Run: systemctl --version
	I1010 18:21:46.676937  329602 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-472518
	I1010 18:21:46.696799  329602 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/embed-certs-472518/id_rsa Username:docker}
	I1010 18:21:46.800845  329602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:46.818119  329602 pause.go:52] kubelet running: true
	I1010 18:21:46.818196  329602 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:21:47.043671  329602 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:21:47.043768  329602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:21:47.129364  329602 cri.go:89] found id: "793d0e41cb7aec4a0f299624e039a34166a5a6807a3d1eedf9e3849fcb6c50de"
	I1010 18:21:47.129385  329602 cri.go:89] found id: "ddf22487acac1f44767d6faad43efdcb55e126e4d543b64497d3614254c5e0d5"
	I1010 18:21:47.129390  329602 cri.go:89] found id: "f6b933b2408d071de401c79bb1ddb49b0541cd7149531813e62215ccc3e7bf16"
	I1010 18:21:47.129395  329602 cri.go:89] found id: "106735404cace4f8939a0b5039e3d3506588ed35258591de2b5d9b775beb2175"
	I1010 18:21:47.129398  329602 cri.go:89] found id: "e0b6d3ae90667b41d0180616bdfecaebc14631771bd3e49defdf3d111b564ad9"
	I1010 18:21:47.129401  329602 cri.go:89] found id: "159136e63b21ef09e85b6efdc6b5a0f5be67f5af9a3516c5f8cae7be0af60846"
	I1010 18:21:47.129404  329602 cri.go:89] found id: "3622c66fa378c4b8614e23f6545ac6151fa6ef096364723cbdd5d22677bc0ca9"
	I1010 18:21:47.129406  329602 cri.go:89] found id: "a5c1be1847d40640048f86d96a7f93b4166d1688a8afd40971231c2b59f73202"
	I1010 18:21:47.129409  329602 cri.go:89] found id: "a52804abc0e7184b8ec037e1a9594b3794f50868b2f90978e95ba4f3dac34818"
	I1010 18:21:47.129424  329602 cri.go:89] found id: "dbb59932f2180fbc26c3cde4e4a30573e097fe9d42db58b7eefe8fcc9da4608b"
	I1010 18:21:47.129428  329602 cri.go:89] found id: "53e86f711eb6d6e029bf1dc5a1c14477be282ed5a7268cc1290a1a04c4d06252"
	I1010 18:21:47.129433  329602 cri.go:89] found id: ""
	I1010 18:21:47.129484  329602 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:21:47.144762  329602 retry.go:31] will retry after 159.564675ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:47Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:21:47.305191  329602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:47.319826  329602 pause.go:52] kubelet running: false
	I1010 18:21:47.319889  329602 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:21:47.491407  329602 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:21:47.491536  329602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:21:47.588371  329602 cri.go:89] found id: "793d0e41cb7aec4a0f299624e039a34166a5a6807a3d1eedf9e3849fcb6c50de"
	I1010 18:21:47.588403  329602 cri.go:89] found id: "ddf22487acac1f44767d6faad43efdcb55e126e4d543b64497d3614254c5e0d5"
	I1010 18:21:47.588409  329602 cri.go:89] found id: "f6b933b2408d071de401c79bb1ddb49b0541cd7149531813e62215ccc3e7bf16"
	I1010 18:21:47.588414  329602 cri.go:89] found id: "106735404cace4f8939a0b5039e3d3506588ed35258591de2b5d9b775beb2175"
	I1010 18:21:47.588418  329602 cri.go:89] found id: "e0b6d3ae90667b41d0180616bdfecaebc14631771bd3e49defdf3d111b564ad9"
	I1010 18:21:47.588423  329602 cri.go:89] found id: "159136e63b21ef09e85b6efdc6b5a0f5be67f5af9a3516c5f8cae7be0af60846"
	I1010 18:21:47.588426  329602 cri.go:89] found id: "3622c66fa378c4b8614e23f6545ac6151fa6ef096364723cbdd5d22677bc0ca9"
	I1010 18:21:47.588430  329602 cri.go:89] found id: "a5c1be1847d40640048f86d96a7f93b4166d1688a8afd40971231c2b59f73202"
	I1010 18:21:47.588444  329602 cri.go:89] found id: "a52804abc0e7184b8ec037e1a9594b3794f50868b2f90978e95ba4f3dac34818"
	I1010 18:21:47.588451  329602 cri.go:89] found id: "dbb59932f2180fbc26c3cde4e4a30573e097fe9d42db58b7eefe8fcc9da4608b"
	I1010 18:21:47.588455  329602 cri.go:89] found id: "53e86f711eb6d6e029bf1dc5a1c14477be282ed5a7268cc1290a1a04c4d06252"
	I1010 18:21:47.588459  329602 cri.go:89] found id: ""
	I1010 18:21:47.588516  329602 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:21:47.606756  329602 retry.go:31] will retry after 312.40682ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:47Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:21:47.920308  329602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:47.935646  329602 pause.go:52] kubelet running: false
	I1010 18:21:47.935712  329602 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:21:48.104280  329602 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:21:48.104351  329602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:21:48.185533  329602 cri.go:89] found id: "793d0e41cb7aec4a0f299624e039a34166a5a6807a3d1eedf9e3849fcb6c50de"
	I1010 18:21:48.185558  329602 cri.go:89] found id: "ddf22487acac1f44767d6faad43efdcb55e126e4d543b64497d3614254c5e0d5"
	I1010 18:21:48.185564  329602 cri.go:89] found id: "f6b933b2408d071de401c79bb1ddb49b0541cd7149531813e62215ccc3e7bf16"
	I1010 18:21:48.185568  329602 cri.go:89] found id: "106735404cace4f8939a0b5039e3d3506588ed35258591de2b5d9b775beb2175"
	I1010 18:21:48.185572  329602 cri.go:89] found id: "e0b6d3ae90667b41d0180616bdfecaebc14631771bd3e49defdf3d111b564ad9"
	I1010 18:21:48.185577  329602 cri.go:89] found id: "159136e63b21ef09e85b6efdc6b5a0f5be67f5af9a3516c5f8cae7be0af60846"
	I1010 18:21:48.185580  329602 cri.go:89] found id: "3622c66fa378c4b8614e23f6545ac6151fa6ef096364723cbdd5d22677bc0ca9"
	I1010 18:21:48.185584  329602 cri.go:89] found id: "a5c1be1847d40640048f86d96a7f93b4166d1688a8afd40971231c2b59f73202"
	I1010 18:21:48.185588  329602 cri.go:89] found id: "a52804abc0e7184b8ec037e1a9594b3794f50868b2f90978e95ba4f3dac34818"
	I1010 18:21:48.185595  329602 cri.go:89] found id: "dbb59932f2180fbc26c3cde4e4a30573e097fe9d42db58b7eefe8fcc9da4608b"
	I1010 18:21:48.185599  329602 cri.go:89] found id: "53e86f711eb6d6e029bf1dc5a1c14477be282ed5a7268cc1290a1a04c4d06252"
	I1010 18:21:48.185602  329602 cri.go:89] found id: ""
	I1010 18:21:48.185646  329602 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:21:48.198816  329602 retry.go:31] will retry after 411.082424ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:48Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:21:48.610376  329602 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:48.624432  329602 pause.go:52] kubelet running: false
	I1010 18:21:48.624481  329602 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:21:48.783209  329602 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:21:48.783283  329602 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:21:48.876437  329602 cri.go:89] found id: "793d0e41cb7aec4a0f299624e039a34166a5a6807a3d1eedf9e3849fcb6c50de"
	I1010 18:21:48.876466  329602 cri.go:89] found id: "ddf22487acac1f44767d6faad43efdcb55e126e4d543b64497d3614254c5e0d5"
	I1010 18:21:48.876473  329602 cri.go:89] found id: "f6b933b2408d071de401c79bb1ddb49b0541cd7149531813e62215ccc3e7bf16"
	I1010 18:21:48.876478  329602 cri.go:89] found id: "106735404cace4f8939a0b5039e3d3506588ed35258591de2b5d9b775beb2175"
	I1010 18:21:48.876483  329602 cri.go:89] found id: "e0b6d3ae90667b41d0180616bdfecaebc14631771bd3e49defdf3d111b564ad9"
	I1010 18:21:48.876489  329602 cri.go:89] found id: "159136e63b21ef09e85b6efdc6b5a0f5be67f5af9a3516c5f8cae7be0af60846"
	I1010 18:21:48.876494  329602 cri.go:89] found id: "3622c66fa378c4b8614e23f6545ac6151fa6ef096364723cbdd5d22677bc0ca9"
	I1010 18:21:48.876498  329602 cri.go:89] found id: "a5c1be1847d40640048f86d96a7f93b4166d1688a8afd40971231c2b59f73202"
	I1010 18:21:48.876502  329602 cri.go:89] found id: "a52804abc0e7184b8ec037e1a9594b3794f50868b2f90978e95ba4f3dac34818"
	I1010 18:21:48.876511  329602 cri.go:89] found id: "dbb59932f2180fbc26c3cde4e4a30573e097fe9d42db58b7eefe8fcc9da4608b"
	I1010 18:21:48.876515  329602 cri.go:89] found id: "53e86f711eb6d6e029bf1dc5a1c14477be282ed5a7268cc1290a1a04c4d06252"
	I1010 18:21:48.876519  329602 cri.go:89] found id: ""
	I1010 18:21:48.876563  329602 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:21:48.892535  329602 out.go:203] 
	W1010 18:21:48.894121  329602 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 18:21:48.894142  329602 out.go:285] * 
	W1010 18:21:48.898323  329602 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 18:21:48.899629  329602 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-472518 --alsologtostderr -v=1 failed: exit status 80
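The GUEST_PAUSE failure above is a disagreement between the two container views the pause path consults: crictl (backed by CRI-O) still reports the kube-system containers, while `sudo runc list -f json` fails because runc's default state root, /run/runc, does not exist on the node. A minimal sketch to reproduce the mismatch by hand, assuming a shell inside the node (e.g. via `minikube ssh -p embed-certs-472518`); the label filter is copied from the log above:
	# kubelet check, mirroring the is-active/disable sequence in the log
	sudo systemctl is-active --quiet kubelet; echo "kubelet active (0=yes): $?"
	# CRI-O still lists the kube-system containers...
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# ...but runc's default state root is absent, so a direct listing fails
	ls -ld /run/runc
	sudo runc list -f json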
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-472518
helpers_test.go:243: (dbg) docker inspect embed-certs-472518:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e",
	        "Created": "2025-10-10T18:19:36.31646399Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315445,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:20:42.102857084Z",
	            "FinishedAt": "2025-10-10T18:20:41.266629383Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e/hosts",
	        "LogPath": "/var/lib/docker/containers/2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e/2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e-json.log",
	        "Name": "/embed-certs-472518",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-472518:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-472518",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e",
	                "LowerDir": "/var/lib/docker/overlay2/5fa96fd4ec73d503d3d3528d8c7b13f7ca1b0a64ecf18291fa642aa2e0a2033a-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5fa96fd4ec73d503d3d3528d8c7b13f7ca1b0a64ecf18291fa642aa2e0a2033a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5fa96fd4ec73d503d3d3528d8c7b13f7ca1b0a64ecf18291fa642aa2e0a2033a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5fa96fd4ec73d503d3d3528d8c7b13f7ca1b0a64ecf18291fa642aa2e0a2033a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-472518",
	                "Source": "/var/lib/docker/volumes/embed-certs-472518/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-472518",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-472518",
	                "name.minikube.sigs.k8s.io": "embed-certs-472518",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1eaba29d650e742cb0aa0d02a484531c40045eacff0ab67a86619c74f99ba3af",
	            "SandboxKey": "/var/run/docker/netns/1eaba29d650e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-472518": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:0b:88:2b:88:2c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cbce2d732620a5010a9bb6fa38f48aa0b3fba945ed0c5927e2d54406158c8a77",
	                    "EndpointID": "49bf35f7d183b3ea09fba66178faaeb753d4bd17df51c224dd76e667fe1ba4f4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-472518",
	                        "2e7bf16e9ebb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
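Most of the inspect dump above is irrelevant to the pause failure; the handful of fields the post-mortem actually checks can be pulled directly with Go templates (a convenience sketch, not part of the test harness):
	docker inspect embed-certs-472518 --format 'status={{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}'
	docker inspect embed-certs-472518 --format '{{json .NetworkSettings.Ports}}'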
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-472518 -n embed-certs-472518
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-472518 -n embed-certs-472518: exit status 2 (395.627123ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
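minikube status encodes component health in its exit code (per `minikube status --help`), which is why the harness treats exit status 2 as possibly benign; to surface the code directly when reproducing (a trivial sketch):
	out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-472518 -n embed-certs-472518
	echo "status exit code: $?"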
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-472518 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-472518 logs -n 25: (1.393675436s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p bridge-078032                                                                                                                                                                                                                              │ bridge-078032                │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-141193 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p old-k8s-version-141193 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ stop    │ -p embed-certs-472518 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-556024 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ delete  │ -p disable-driver-mounts-523797                                                                                                                                                                                                               │ disable-driver-mounts-523797 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ stop    │ -p no-preload-556024 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-472518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p embed-certs-472518 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable dashboard -p no-preload-556024 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p no-preload-556024 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-821769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-821769 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ image   │ old-k8s-version-141193 image list --format=json                                                                                                                                                                                               │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p old-k8s-version-141193 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-821769 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ no-preload-556024 image list --format=json                                                                                                                                                                                                    │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p no-preload-556024 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ embed-certs-472518 image list --format=json                                                                                                                                                                                                   │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p embed-certs-472518 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:21:36
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:21:36.443972  325699 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:21:36.444232  325699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:36.444242  325699 out.go:374] Setting ErrFile to fd 2...
	I1010 18:21:36.444246  325699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:36.444423  325699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:21:36.444868  325699 out.go:368] Setting JSON to false
	I1010 18:21:36.445989  325699 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3836,"bootTime":1760116660,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:21:36.446111  325699 start.go:141] virtualization: kvm guest
	I1010 18:21:36.447655  325699 out.go:179] * [default-k8s-diff-port-821769] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:21:36.451745  325699 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:21:36.451794  325699 notify.go:220] Checking for updates...
	I1010 18:21:36.453782  325699 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:21:36.454903  325699 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:36.456168  325699 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:21:36.457303  325699 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:21:36.458541  325699 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:21:36.460107  325699 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:36.460644  325699 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:21:36.487553  325699 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:21:36.487706  325699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:21:36.548644  325699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:82 SystemTime:2025-10-10 18:21:36.539560881 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:21:36.548787  325699 docker.go:318] overlay module found
	I1010 18:21:36.550878  325699 out.go:179] * Using the docker driver based on existing profile
	I1010 18:21:31.750233  324649 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1010 18:21:31.750529  324649 start.go:159] libmachine.API.Create for "newest-cni-121129" (driver="docker")
	I1010 18:21:31.750565  324649 client.go:168] LocalClient.Create starting
	I1010 18:21:31.750670  324649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem
	I1010 18:21:31.750723  324649 main.go:141] libmachine: Decoding PEM data...
	I1010 18:21:31.750746  324649 main.go:141] libmachine: Parsing certificate...
	I1010 18:21:31.750822  324649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem
	I1010 18:21:31.750849  324649 main.go:141] libmachine: Decoding PEM data...
	I1010 18:21:31.750864  324649 main.go:141] libmachine: Parsing certificate...
	I1010 18:21:31.751250  324649 cli_runner.go:164] Run: docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1010 18:21:31.769180  324649 cli_runner.go:211] docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1010 18:21:31.769299  324649 network_create.go:284] running [docker network inspect newest-cni-121129] to gather additional debugging logs...
	I1010 18:21:31.769325  324649 cli_runner.go:164] Run: docker network inspect newest-cni-121129
	W1010 18:21:31.785789  324649 cli_runner.go:211] docker network inspect newest-cni-121129 returned with exit code 1
	I1010 18:21:31.785839  324649 network_create.go:287] error running [docker network inspect newest-cni-121129]: docker network inspect newest-cni-121129: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-121129 not found
	I1010 18:21:31.785860  324649 network_create.go:289] output of [docker network inspect newest-cni-121129]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-121129 not found
	
	** /stderr **
	I1010 18:21:31.785985  324649 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:21:31.803517  324649 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f8fb0c8a54c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:51:a2:ab:ca:d6} reservation:<nil>}
	I1010 18:21:31.804204  324649 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bdbbffbd65c1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:11:33:77:48:20} reservation:<nil>}
	I1010 18:21:31.804907  324649 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b6a5dab2001 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:93:a5:d3:c3:8f} reservation:<nil>}
	I1010 18:21:31.805493  324649 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-62177a68d9eb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:70:f2:a2:da:00} reservation:<nil>}
	I1010 18:21:31.806333  324649 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f75590}
	I1010 18:21:31.806360  324649 network_create.go:124] attempt to create docker network newest-cni-121129 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1010 18:21:31.806398  324649 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-121129 newest-cni-121129
	I1010 18:21:31.865994  324649 network_create.go:108] docker network newest-cni-121129 192.168.85.0/24 created
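The subnet scan above walks 192.168.49.0/24, .58, .67, and .76, skipping each one already claimed by an existing bridge, and settles on the first free candidate, 192.168.85.0/24. The same picture is available from the docker CLI (a sketch using standard docker formatting flags, not minikube code):
	# list the subnets already claimed by existing docker networks
	docker network ls --format '{{.Name}}' \
	  | xargs -r -n1 docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# then create the bridge on the free candidate, as the log does
	docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 newest-cni-121129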
	I1010 18:21:31.866029  324649 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-121129" container
	I1010 18:21:31.866140  324649 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1010 18:21:31.883599  324649 cli_runner.go:164] Run: docker volume create newest-cni-121129 --label name.minikube.sigs.k8s.io=newest-cni-121129 --label created_by.minikube.sigs.k8s.io=true
	I1010 18:21:31.901755  324649 oci.go:103] Successfully created a docker volume newest-cni-121129
	I1010 18:21:31.901834  324649 cli_runner.go:164] Run: docker run --rm --name newest-cni-121129-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-121129 --entrypoint /usr/bin/test -v newest-cni-121129:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -d /var/lib
	I1010 18:21:32.316917  324649 oci.go:107] Successfully prepared a docker volume newest-cni-121129
	I1010 18:21:32.316960  324649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:32.316979  324649 kic.go:194] Starting extracting preloaded images to volume ...
	I1010 18:21:32.317041  324649 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-121129:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1010 18:21:36.215225  324649 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-121129:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.898129423s)
	I1010 18:21:36.215274  324649 kic.go:203] duration metric: took 3.898290657s to extract preloaded images to volume ...
	W1010 18:21:36.215394  324649 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1010 18:21:36.215437  324649 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1010 18:21:36.215483  324649 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1010 18:21:36.276319  324649 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-121129 --name newest-cni-121129 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-121129 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-121129 --network newest-cni-121129 --ip 192.168.85.2 --volume newest-cni-121129:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6
	I1010 18:21:36.552156  325699 start.go:305] selected driver: docker
	I1010 18:21:36.552182  325699 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:36.552263  325699 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:21:36.552888  325699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:21:36.619123  325699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-10 18:21:36.608354336 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:21:36.619511  325699 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:21:36.619549  325699 cni.go:84] Creating CNI manager for ""
	I1010 18:21:36.619602  325699 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:36.619655  325699 start.go:349] cluster config:
	{Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:36.621174  325699 out.go:179] * Starting "default-k8s-diff-port-821769" primary control-plane node in "default-k8s-diff-port-821769" cluster
	I1010 18:21:36.623163  325699 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:21:36.624439  325699 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:21:36.625488  325699 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:36.625524  325699 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:21:36.625536  325699 cache.go:58] Caching tarball of preloaded images
	I1010 18:21:36.625602  325699 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:21:36.625620  325699 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:21:36.625631  325699 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1010 18:21:36.625748  325699 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/config.json ...
	I1010 18:21:36.646734  325699 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:21:36.646759  325699 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:21:36.646779  325699 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:21:36.646809  325699 start.go:360] acquireMachinesLock for default-k8s-diff-port-821769: {Name:mk32364aa6b9096e7aa0195f0d450a3e04b4f6f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:21:36.646879  325699 start.go:364] duration metric: took 45.359µs to acquireMachinesLock for "default-k8s-diff-port-821769"
	I1010 18:21:36.646912  325699 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:21:36.646922  325699 fix.go:54] fixHost starting: 
	I1010 18:21:36.647229  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:36.665115  325699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-821769: state=Stopped err=<nil>
	W1010 18:21:36.665142  325699 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 18:21:36.566005  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Running}}
	I1010 18:21:36.587637  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:36.609439  324649 cli_runner.go:164] Run: docker exec newest-cni-121129 stat /var/lib/dpkg/alternatives/iptables
	I1010 18:21:36.654885  324649 oci.go:144] the created container "newest-cni-121129" has a running status.
	I1010 18:21:36.654911  324649 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa...
	I1010 18:21:37.150404  324649 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1010 18:21:37.181411  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:37.202450  324649 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1010 18:21:37.202483  324649 kic_runner.go:114] Args: [docker exec --privileged newest-cni-121129 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1010 18:21:37.249728  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:37.274026  324649 machine.go:93] provisionDockerMachine start ...
	I1010 18:21:37.274139  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:37.295767  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.296119  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:37.296140  324649 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:21:37.433206  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:21:37.433232  324649 ubuntu.go:182] provisioning hostname "newest-cni-121129"
	I1010 18:21:37.433293  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:37.451228  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.451497  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:37.451516  324649 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-121129 && echo "newest-cni-121129" | sudo tee /etc/hostname
	I1010 18:21:37.593295  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:21:37.593411  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:37.611384  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.611592  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:37.611611  324649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-121129' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-121129/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-121129' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:21:37.744646  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:21:37.744678  324649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:21:37.744702  324649 ubuntu.go:190] setting up certificates
	I1010 18:21:37.744714  324649 provision.go:84] configureAuth start
	I1010 18:21:37.744775  324649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:21:37.762585  324649 provision.go:143] copyHostCerts
	I1010 18:21:37.762636  324649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:21:37.762644  324649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:21:37.762711  324649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:21:37.762804  324649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:21:37.762812  324649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:21:37.762837  324649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:21:37.762889  324649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:21:37.762896  324649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:21:37.762918  324649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:21:37.762968  324649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.newest-cni-121129 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-121129]
	I1010 18:21:38.017732  324649 provision.go:177] copyRemoteCerts
	I1010 18:21:38.017792  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:21:38.017828  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.035754  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.135582  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:21:38.158372  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 18:21:38.177887  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:21:38.197335  324649 provision.go:87] duration metric: took 452.609625ms to configureAuth
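	[Reviewer note] The configureAuth step above (18:21:37.744714 through 18:21:38.197335) re-issues the machine's server certificate against minikube's local CA, with the SAN set printed at 18:21:37.762968. Below is a minimal Go sketch of that style of SAN-bearing issuance; the key size, validity window, and template fields are assumptions for illustration, not minikube's actual provision.go logic.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Stand-in for minikube's ca.pem / ca-key.pem pair (here: freshly generated).
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SAN list from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-121129"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-121129"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		// Would land in machines/server.pem in the layout above.
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}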
	I1010 18:21:38.197361  324649 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:21:38.197520  324649 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:38.197616  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.215693  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:38.215929  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:38.215945  324649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:21:38.487590  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:21:38.487615  324649 machine.go:96] duration metric: took 1.213566349s to provisionDockerMachine
	I1010 18:21:38.487627  324649 client.go:171] duration metric: took 6.737054602s to LocalClient.Create
	I1010 18:21:38.487644  324649 start.go:167] duration metric: took 6.737116946s to libmachine.API.Create "newest-cni-121129"
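	[Reviewer note] Everything in this provisioning phase is driven through the "About to run SSH command" pattern: dial the container's published SSH port (33123 here) with the generated id_rsa key and run a shell snippet, such as the /etc/sysconfig/crio.minikube write whose output appears above. A rough equivalent, assuming golang.org/x/crypto/ssh is available; the key path and host-key handling are illustrative, not minikube's exact sshutil code.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/newest-cni-121129/id_rsa"))
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rigs skip host key pinning
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33123", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`)
		fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
	}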
	I1010 18:21:38.487653  324649 start.go:293] postStartSetup for "newest-cni-121129" (driver="docker")
	I1010 18:21:38.487667  324649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:21:38.487718  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:21:38.487755  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.505301  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.604755  324649 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:21:38.608251  324649 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:21:38.608275  324649 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:21:38.608284  324649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:21:38.608338  324649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:21:38.608407  324649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:21:38.608505  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:21:38.617071  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:38.639238  324649 start.go:296] duration metric: took 151.569017ms for postStartSetup
	I1010 18:21:38.639632  324649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:21:38.658650  324649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/config.json ...
	I1010 18:21:38.658910  324649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:21:38.658972  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.676393  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.770086  324649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:21:38.774771  324649 start.go:128] duration metric: took 7.026418609s to createHost
	I1010 18:21:38.774799  324649 start.go:83] releasing machines lock for "newest-cni-121129", held for 7.026572954s
	I1010 18:21:38.774867  324649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:21:38.794249  324649 ssh_runner.go:195] Run: cat /version.json
	I1010 18:21:38.794292  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.794343  324649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:21:38.794395  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.812781  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.813044  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.964620  324649 ssh_runner.go:195] Run: systemctl --version
	I1010 18:21:38.971493  324649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:21:39.008047  324649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:21:39.012702  324649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:21:39.012768  324649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:21:39.043167  324649 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
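	[Reviewer note] Before settling on kindnet, the runner sidelines any pre-existing bridge/podman CNI definitions by renaming them with a .mk_disabled suffix, as the find/mv one-liner above shows. A local Go equivalent of that rename pass, under the assumption that a plain glob-and-rename captures the same behavior:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pat := range []string{"*bridge*", "*podman*"} {
			matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pat))
			if err != nil {
				continue
			}
			for _, m := range matches {
				// Mirrors find's "-not -name *.mk_disabled": skip already-disabled files.
				if strings.HasSuffix(m, ".mk_disabled") {
					continue
				}
				if err := os.Rename(m, m+".mk_disabled"); err == nil {
					fmt.Printf("disabled %s\n", m)
				}
			}
		}
	}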
	I1010 18:21:39.043195  324649 start.go:495] detecting cgroup driver to use...
	I1010 18:21:39.043236  324649 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:21:39.043275  324649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:21:39.060424  324649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:21:39.073422  324649 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:21:39.073477  324649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:21:39.090113  324649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:21:39.108184  324649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:21:39.193075  324649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:21:39.284238  324649 docker.go:234] disabling docker service ...
	I1010 18:21:39.284295  324649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:21:39.303174  324649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:21:39.316224  324649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:21:39.401593  324649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:21:39.486478  324649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:21:39.499671  324649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:21:39.515336  324649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:21:39.515393  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.526705  324649 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:21:39.526768  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.536968  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.546772  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.556927  324649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:21:39.566265  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.576240  324649 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.591514  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.601231  324649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:21:39.609546  324649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:21:39.617339  324649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:39.697520  324649 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:21:39.833447  324649 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:21:39.833510  324649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:21:39.837650  324649 start.go:563] Will wait 60s for crictl version
	I1010 18:21:39.837706  324649 ssh_runner.go:195] Run: which crictl
	I1010 18:21:39.841778  324649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:21:39.866403  324649 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
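	[Reviewer note] Both "Will wait 60s" lines above are bounded retry loops: one for the crio.sock path, one for a successful crictl version call. A stripped-down sketch of the socket wait, with an assumed 500ms poll interval (the real cadence is not shown in the log):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the socket file exists or the deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}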
	I1010 18:21:39.866489  324649 ssh_runner.go:195] Run: crio --version
	I1010 18:21:39.894594  324649 ssh_runner.go:195] Run: crio --version
	I1010 18:21:39.923363  324649 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:21:39.924491  324649 cli_runner.go:164] Run: docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
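	[Reviewer note] The docker network inspect call above pulls the cluster network's subnet and gateway out of a Go template. The same information can be recovered by decoding the default JSON output, as in this sketch; the trimmed-down struct is an assumption, not minikube's type.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type network struct {
		Name string
		IPAM struct {
			Config []struct {
				Subnet  string
				Gateway string
			}
		}
	}

	func main() {
		// `docker network inspect` emits a JSON array of network objects.
		out, err := exec.Command("docker", "network", "inspect", "newest-cni-121129").Output()
		if err != nil {
			panic(err)
		}
		var nets []network
		if err := json.Unmarshal(out, &nets); err != nil {
			panic(err)
		}
		for _, n := range nets {
			for _, c := range n.IPAM.Config {
				fmt.Printf("%s: subnet=%s gateway=%s\n", n.Name, c.Subnet, c.Gateway)
			}
		}
	}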
	I1010 18:21:39.942921  324649 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1010 18:21:39.947042  324649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:21:39.959308  324649 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1010 18:21:36.669200  325699 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-821769" ...
	I1010 18:21:36.669266  325699 cli_runner.go:164] Run: docker start default-k8s-diff-port-821769
	I1010 18:21:36.950209  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:36.973712  325699 kic.go:430] container "default-k8s-diff-port-821769" state is running.
	I1010 18:21:36.974205  325699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-821769
	I1010 18:21:36.999384  325699 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/config.json ...
	I1010 18:21:36.999678  325699 machine.go:93] provisionDockerMachine start ...
	I1010 18:21:36.999832  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:37.025140  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.025476  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:37.025494  325699 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:21:37.026335  325699 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37242->127.0.0.1:33128: read: connection reset by peer
	I1010 18:21:40.162873  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-821769
	
	I1010 18:21:40.162901  325699 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-821769"
	I1010 18:21:40.162999  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.189150  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:40.189443  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:40.189466  325699 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-821769 && echo "default-k8s-diff-port-821769" | sudo tee /etc/hostname
	I1010 18:21:40.331478  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-821769
	
	I1010 18:21:40.331570  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.349460  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:40.349752  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:40.349789  325699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-821769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-821769/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-821769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:21:40.495960  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:21:40.495988  325699 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:21:40.496005  325699 ubuntu.go:190] setting up certificates
	I1010 18:21:40.496013  325699 provision.go:84] configureAuth start
	I1010 18:21:40.496106  325699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-821769
	I1010 18:21:40.515849  325699 provision.go:143] copyHostCerts
	I1010 18:21:40.515918  325699 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:21:40.515937  325699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:21:40.516030  325699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:21:40.516170  325699 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:21:40.516190  325699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:21:40.516240  325699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:21:40.516317  325699 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:21:40.516328  325699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:21:40.516365  325699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:21:40.516437  325699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-821769 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-821769 localhost minikube]
	I1010 18:21:40.621000  325699 provision.go:177] copyRemoteCerts
	I1010 18:21:40.621136  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:21:40.621199  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.639539  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:40.738484  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:21:40.758076  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1010 18:21:40.777450  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:21:40.796411  325699 provision.go:87] duration metric: took 300.38696ms to configureAuth
	I1010 18:21:40.796439  325699 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:21:40.796606  325699 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:40.796693  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.814633  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:40.814851  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:40.814874  325699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:21:41.126788  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:21:41.126818  325699 machine.go:96] duration metric: took 4.127117296s to provisionDockerMachine
	I1010 18:21:41.126831  325699 start.go:293] postStartSetup for "default-k8s-diff-port-821769" (driver="docker")
	I1010 18:21:41.126845  325699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:21:41.126909  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:21:41.126956  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.146094  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:41.244401  325699 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:21:41.247953  325699 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:21:41.247984  325699 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:21:41.247996  325699 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:21:41.248060  325699 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:21:41.248175  325699 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:21:41.248266  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:21:41.256669  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:41.275845  325699 start.go:296] duration metric: took 149.001179ms for postStartSetup
	I1010 18:21:41.275913  325699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:21:41.275950  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.294158  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:41.387292  325699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:21:41.391952  325699 fix.go:56] duration metric: took 4.745025215s for fixHost
	I1010 18:21:41.391980  325699 start.go:83] releasing machines lock for "default-k8s-diff-port-821769", held for 4.745085816s
	I1010 18:21:41.392032  325699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-821769
	I1010 18:21:41.410356  325699 ssh_runner.go:195] Run: cat /version.json
	I1010 18:21:41.410400  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.410462  325699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:21:41.410537  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.428673  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:41.429174  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:39.960290  324649 kubeadm.go:883] updating cluster {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:21:39.960390  324649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:39.960442  324649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:39.991643  324649 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:39.991664  324649 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:21:39.991716  324649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:40.018213  324649 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:40.018233  324649 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:21:40.018240  324649 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1010 18:21:40.018331  324649 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-121129 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:21:40.018427  324649 ssh_runner.go:195] Run: crio config
	I1010 18:21:40.065330  324649 cni.go:84] Creating CNI manager for ""
	I1010 18:21:40.065358  324649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:40.065375  324649 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1010 18:21:40.065395  324649 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-121129 NodeName:newest-cni-121129 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:21:40.065508  324649 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-121129"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:21:40.065561  324649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:21:40.074911  324649 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:21:40.074973  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:21:40.083566  324649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1010 18:21:40.097986  324649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:21:40.114282  324649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
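	[Reviewer note] The 2211-byte kubeadm.yaml.new pushed above is rendered from the kubeadm options struct logged at 18:21:40.065395 and matches the full config dump printed earlier. A cut-down text/template sketch of that rendering, reduced here to the networking stanza (the template text and struct are illustrative only, not minikube's):

	package main

	import (
		"os"
		"text/template"
	)

	type Networking struct {
		DNSDomain     string
		PodSubnet     string
		ServiceSubnet string
	}

	const tmpl = `networking:
	  dnsDomain: {{.DNSDomain}}
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		t := template.Must(template.New("networking").Parse(tmpl))
		// Values taken from the kubeadm config dumped above.
		_ = t.Execute(os.Stdout, Networking{
			DNSDomain:     "cluster.local",
			PodSubnet:     "10.42.0.0/16",
			ServiceSubnet: "10.96.0.0/12",
		})
	}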
	I1010 18:21:40.128847  324649 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:21:40.132698  324649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:21:40.143413  324649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:40.227094  324649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:40.249628  324649 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129 for IP: 192.168.85.2
	I1010 18:21:40.249652  324649 certs.go:195] generating shared ca certs ...
	I1010 18:21:40.249678  324649 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:40.249833  324649 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:21:40.249870  324649 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:21:40.249880  324649 certs.go:257] generating profile certs ...
	I1010 18:21:40.249964  324649 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key
	I1010 18:21:40.249986  324649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.crt with IP's: []
	I1010 18:21:40.601463  324649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.crt ...
	I1010 18:21:40.601490  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.crt: {Name:mk644ed6d675dd6a538c02d2c8e614b2a15b3122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:40.601663  324649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key ...
	I1010 18:21:40.601672  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key: {Name:mk914b6f6ffa18eaa800e7d301f088828f088f03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:40.601751  324649 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7
	I1010 18:21:40.601767  324649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1010 18:21:41.352224  324649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7 ...
	I1010 18:21:41.352248  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7: {Name:mkdef5060ad4b077648f6c85a78fa3bbbb5e73d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.352404  324649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7 ...
	I1010 18:21:41.352424  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7: {Name:mkfea0f84cddcdc4e3c69624946502bcf937c477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.352501  324649 certs.go:382] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7 -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt
	I1010 18:21:41.352570  324649 certs.go:386] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7 -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key
	I1010 18:21:41.352640  324649 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key
	I1010 18:21:41.352657  324649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt with IP's: []
	I1010 18:21:41.590793  325699 ssh_runner.go:195] Run: systemctl --version
	I1010 18:21:41.597352  325699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:21:41.632391  325699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:21:41.637267  325699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:21:41.637329  325699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:21:41.646619  325699 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1010 18:21:41.646643  325699 start.go:495] detecting cgroup driver to use...
	I1010 18:21:41.646672  325699 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:21:41.646707  325699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:21:41.662702  325699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:21:41.675945  325699 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:21:41.675998  325699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:21:41.690577  325699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:21:41.703139  325699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:21:41.785080  325699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:21:41.887442  325699 docker.go:234] disabling docker service ...
	I1010 18:21:41.887510  325699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:21:41.902511  325699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:21:41.915792  325699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:21:41.998153  325699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:21:42.082320  325699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:21:42.095388  325699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:21:42.110606  325699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:21:42.110668  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.120566  325699 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:21:42.120611  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.130445  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.140220  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.149997  325699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:21:42.159172  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.168739  325699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.177930  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.187922  325699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:21:42.196256  325699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:21:42.204604  325699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:42.288532  325699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:21:42.425073  325699 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:21:42.425143  325699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:21:42.429651  325699 start.go:563] Will wait 60s for crictl version
	I1010 18:21:42.429707  325699 ssh_runner.go:195] Run: which crictl
	I1010 18:21:42.433310  325699 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:21:42.459422  325699 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:21:42.459511  325699 ssh_runner.go:195] Run: crio --version
	I1010 18:21:42.491064  325699 ssh_runner.go:195] Run: crio --version
	I1010 18:21:42.523177  325699 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:21:42.524273  325699 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-821769 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:21:42.544600  325699 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1010 18:21:42.549336  325699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
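	[Reviewer note] As with host.minikube.internal here (and control-plane.minikube.internal later), minikube keeps /etc/hosts idempotent: strip any line tagged with the managed name, then append the current mapping. A Go rendition of that grep -v/echo/cp pipeline, simplified to write the file directly instead of going through a temp file and sudo cp:

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.103.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale line carrying the managed hostname.
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}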
	I1010 18:21:42.561250  325699 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:21:42.561363  325699 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:42.561407  325699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:42.595069  325699 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:42.595092  325699 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:21:42.595137  325699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:42.621683  325699 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:42.621708  325699 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:21:42.621718  325699 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1010 18:21:42.621877  325699 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-821769 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:21:42.621955  325699 ssh_runner.go:195] Run: crio config
	I1010 18:21:42.670696  325699 cni.go:84] Creating CNI manager for ""
	I1010 18:21:42.670714  325699 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:42.670729  325699 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 18:21:42.670749  325699 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-821769 NodeName:default-k8s-diff-port-821769 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:21:42.670867  325699 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-821769"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:21:42.670920  325699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:21:42.679913  325699 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:21:42.679968  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:21:42.688618  325699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1010 18:21:42.703331  325699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:21:42.718311  325699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
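
The file staged above is the rendered config logged at kubeadm.go:196: four objects in one YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), with the control plane on the non-default port 8444. Recent kubeadm releases can schema-check such a file before it is used; a minimal sketch, assuming kubeadm v1.34.x is on PATH:

	# Validate the staged config against the v1beta4 kubeadm API.
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
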
	I1010 18:21:42.732968  325699 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:21:42.736868  325699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
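
The hosts-file one-liner above is an idempotent upsert: filter out any existing control-plane entry, append a fresh one, and copy the result back over /etc/hosts. The same pattern in isolation, with the name and IP from this run:

	NAME=control-plane.minikube.internal
	IP=192.168.103.2
	# Drop any line already ending in "<tab>NAME", then append the new mapping.
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
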
	I1010 18:21:42.747553  325699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:42.829086  325699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:42.858574  325699 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769 for IP: 192.168.103.2
	I1010 18:21:42.858598  325699 certs.go:195] generating shared ca certs ...
	I1010 18:21:42.858623  325699 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:42.858780  325699 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:21:42.858834  325699 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:21:42.858849  325699 certs.go:257] generating profile certs ...
	I1010 18:21:42.858967  325699 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/client.key
	I1010 18:21:42.859085  325699 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/apiserver.key.10168654
	I1010 18:21:42.859140  325699 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/proxy-client.key
	I1010 18:21:42.859285  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:21:42.859321  325699 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:21:42.859336  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:21:42.859370  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:21:42.859399  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:21:42.859429  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:21:42.859481  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:42.860204  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:21:42.882094  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:21:42.903468  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:21:42.925737  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:21:42.953372  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 18:21:42.973504  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:21:42.992899  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:21:43.011728  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:21:43.030624  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:21:43.049802  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:21:43.070120  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:21:43.090039  325699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:21:43.103785  325699 ssh_runner.go:195] Run: openssl version
	I1010 18:21:43.110111  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:21:43.118950  325699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:21:43.122454  325699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:21:43.122512  325699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:21:43.157901  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:21:43.167111  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:21:43.176248  325699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:43.179836  325699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:43.179900  325699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:43.216894  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:21:43.226252  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:21:43.235390  325699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:21:43.239321  325699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:21:43.239380  325699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:21:43.273487  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
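
Each hash/ln pair above follows OpenSSL's CA-directory convention: certificates in /etc/ssl/certs are located via symlinks named <subject-hash>.0, where the hash comes from `openssl x509 -hash`. One step in isolation, using a path from this run:

	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"  # .0 = first cert with this hash
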
	I1010 18:21:43.282570  325699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:21:43.286433  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:21:43.320357  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:21:43.361223  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:21:43.409478  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:21:43.456529  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:21:43.512033  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
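
The `-checkend 86400` runs above ask one question per certificate: does it remain valid for at least another 86400 seconds (24 hours)? Exit status 0 means yes, 1 means it expires inside that window, which is how the certificate-regeneration decision is made. Standalone:

	# Exit 0 if the cert is still valid 24h from now, 1 if it expires sooner.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "cert ok for 24h" || echo "cert expiring soon"
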
	I1010 18:21:43.568244  325699 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:43.568348  325699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:21:43.568440  325699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:21:43.611528  325699 cri.go:89] found id: "1352ca41b0e7626fbf6ee43638506dfab18bd157572e9128f411ac1c5ae54538"
	I1010 18:21:43.611555  325699 cri.go:89] found id: "2aeadcb9e03cc805af5eff4f1b521299f31e4d618387d10eef543b4e95787f70"
	I1010 18:21:43.611560  325699 cri.go:89] found id: "6c6e229b2a8311cf4d60aad6c602e02c2923b5ba2309e536076e40579456e8e2"
	I1010 18:21:43.611565  325699 cri.go:89] found id: "c3f03c923ad6830325d9888fdf2ad9de25ac73298e25b5812f72951d65af2eec"
	I1010 18:21:43.611569  325699 cri.go:89] found id: ""
	I1010 18:21:43.611612  325699 ssh_runner.go:195] Run: sudo runc list -f json
	W1010 18:21:43.627173  325699 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:43Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:21:43.627256  325699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:21:43.638581  325699 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1010 18:21:43.638602  325699 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1010 18:21:43.638652  325699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 18:21:43.650423  325699 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:21:43.651568  325699 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-821769" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:43.652341  325699 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-5815/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-821769" cluster setting kubeconfig missing "default-k8s-diff-port-821769" context setting]
	I1010 18:21:43.653567  325699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:43.655682  325699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 18:21:43.667709  325699 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1010 18:21:43.667743  325699 kubeadm.go:601] duration metric: took 29.134937ms to restartPrimaryControlPlane
	I1010 18:21:43.667753  325699 kubeadm.go:402] duration metric: took 99.518506ms to StartCluster
	I1010 18:21:43.667770  325699 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:43.667845  325699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:43.669889  325699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:43.670281  325699 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:21:43.670407  325699 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:21:43.670513  325699 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-821769"
	I1010 18:21:43.670534  325699 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-821769"
	W1010 18:21:43.670546  325699 addons.go:247] addon storage-provisioner should already be in state true
	I1010 18:21:43.670545  325699 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-821769"
	I1010 18:21:43.670572  325699 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-821769"
	I1010 18:21:43.670580  325699 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-821769"
	W1010 18:21:43.670582  325699 addons.go:247] addon dashboard should already be in state true
	I1010 18:21:43.670595  325699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-821769"
	I1010 18:21:43.670677  325699 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:21:43.670572  325699 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:21:43.670904  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.671151  325699 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:43.671356  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.672130  325699 out.go:179] * Verifying Kubernetes components...
	I1010 18:21:43.672709  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.673037  325699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:43.701170  325699 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:21:43.703152  325699 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:21:43.703189  325699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:21:43.703293  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:43.709767  325699 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-821769"
	W1010 18:21:43.709840  325699 addons.go:247] addon default-storageclass should already be in state true
	I1010 18:21:43.709890  325699 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:21:43.710622  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.711556  325699 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1010 18:21:43.715168  325699 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1010 18:21:43.716093  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1010 18:21:43.716116  325699 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1010 18:21:43.716174  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:43.745595  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:43.754680  325699 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:21:43.754766  325699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:21:43.754853  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:43.766642  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:43.784887  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:43.856990  325699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:43.873309  325699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-821769" to be "Ready" ...
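
The readiness poll started here has a direct kubectl equivalent; a sketch, assuming the kubeconfig context carries the profile name as minikube normally sets it:

	# Block until the node reports Ready, mirroring the 6m wait above.
	kubectl --context default-k8s-diff-port-821769 wait \
	  --for=condition=Ready node/default-k8s-diff-port-821769 --timeout=6m
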
	I1010 18:21:43.936166  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1010 18:21:43.936223  325699 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1010 18:21:43.955509  325699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:21:43.956951  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1010 18:21:43.956971  325699 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1010 18:21:43.985048  325699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:21:43.985772  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1010 18:21:43.986042  325699 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1010 18:21:44.008589  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1010 18:21:44.008614  325699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1010 18:21:44.034035  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1010 18:21:44.034165  325699 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1010 18:21:44.061163  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1010 18:21:44.061253  325699 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1010 18:21:44.112492  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1010 18:21:44.112518  325699 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1010 18:21:44.149803  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1010 18:21:44.149896  325699 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1010 18:21:44.172145  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:21:44.172172  325699 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1010 18:21:44.191656  325699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:21:45.474823  325699 node_ready.go:49] node "default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:45.474857  325699 node_ready.go:38] duration metric: took 1.601510652s for node "default-k8s-diff-port-821769" to be "Ready" ...
	I1010 18:21:45.474873  325699 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:21:45.474923  325699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:21:45.570164  325699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.614616389s)
	I1010 18:21:46.101989  325699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.116012627s)
	I1010 18:21:46.102157  325699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.910456027s)
	I1010 18:21:46.102189  325699 api_server.go:72] duration metric: took 2.431862039s to wait for apiserver process to appear ...
	I1010 18:21:46.102205  325699 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:21:46.102226  325699 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1010 18:21:46.103626  325699 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-821769 addons enable metrics-server
	
	I1010 18:21:46.104750  325699 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1010 18:21:46.105672  325699 addons.go:514] duration metric: took 2.435260331s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1010 18:21:46.106650  325699 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:21:46.106667  325699 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
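
The two failing checks (rbac/bootstrap-roles and the priority-class bootstrap) are post-start hooks that normally turn green within seconds of a restart, so a transient 500 here is expected. The apiserver exposes the same per-check breakdown directly; a sketch, noting that anonymous access to /healthz depends on the default RBAC bindings being in place:

	# Overall health with per-check detail (self-signed certs, hence -k).
	curl -sk 'https://192.168.103.2:8444/healthz?verbose'
	# A single check can be queried by its path.
	curl -sk 'https://192.168.103.2:8444/healthz/poststarthook/rbac/bootstrap-roles'
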
	I1010 18:21:41.799013  324649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt ...
	I1010 18:21:41.799039  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt: {Name:mk0669ceb9e9a4f760f7827d6d6abc6856417c2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.799218  324649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key ...
	I1010 18:21:41.799235  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key: {Name:mk52379a2bae9262f9822bb1871c3d07af332ca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.799416  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:21:41.799450  324649 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:21:41.799460  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:21:41.799483  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:21:41.799510  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:21:41.799531  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:21:41.799566  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:41.800117  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:21:41.825626  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:21:41.848227  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:21:41.869328  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:21:41.890966  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:21:41.911349  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:21:41.931728  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:21:41.958462  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:21:41.979538  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:21:42.001499  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:21:42.024216  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:21:42.046423  324649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:21:42.061125  324649 ssh_runner.go:195] Run: openssl version
	I1010 18:21:42.067212  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:21:42.076458  324649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:42.080715  324649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:42.080767  324649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:42.116344  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:21:42.126106  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:21:42.135627  324649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:21:42.139483  324649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:21:42.139535  324649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:21:42.177476  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:21:42.187546  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:21:42.196815  324649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:21:42.200662  324649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:21:42.200712  324649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:21:42.244485  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:21:42.254296  324649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:21:42.258045  324649 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:21:42.258109  324649 kubeadm.go:400] StartCluster: {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:42.258208  324649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:21:42.258261  324649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:21:42.287523  324649 cri.go:89] found id: ""
	I1010 18:21:42.287614  324649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:21:42.296824  324649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 18:21:42.306175  324649 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1010 18:21:42.306236  324649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 18:21:42.314702  324649 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 18:21:42.314726  324649 kubeadm.go:157] found existing configuration files:
	
	I1010 18:21:42.314769  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 18:21:42.323157  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 18:21:42.323223  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 18:21:42.331276  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 18:21:42.340535  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 18:21:42.340585  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 18:21:42.349218  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 18:21:42.357935  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 18:21:42.357997  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 18:21:42.366285  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 18:21:42.374774  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 18:21:42.374815  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
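
The grep-then-rm sequence above generalizes: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so kubeadm will regenerate it. Compactly, under the same assumptions:

	ENDPOINT=https://control-plane.minikube.internal:8443
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" 2>/dev/null \
	    || sudo rm -f "/etc/kubernetes/$f"  # stale or absent: let kubeadm recreate it
	done
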
	I1010 18:21:42.383131  324649 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1010 18:21:42.446332  324649 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1010 18:21:42.516841  324649 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
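
Both WARNING lines are preflight findings that the long `--ignore-preflight-errors` list above demotes from fatal to informational (SystemVerification is skipped deliberately under the docker driver, per kubeadm.go:214). The preflight phase can also be re-run on its own; a sketch:

	# Run only kubeadm's preflight checks against the same rendered config.
	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification,Swap
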
	
	
	==> CRI-O <==
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.776601859Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=14eb1e11-3464-4bc1-9b9c-9bbc8d779655 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.777759547Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv/dashboard-metrics-scraper" id=e7f5eafe-3b9d-413b-ad27-ba57d72a0f50 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.778001642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.783889713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.784493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.787424677Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.791730114Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.791757042Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.791779384Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.795551998Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.795576645Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.795593803Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.799875715Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.799903018Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.799925219Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.804079125Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.804104557Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.804127534Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.808265895Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.808295667Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.811312807Z" level=info msg="Created container dbb59932f2180fbc26c3cde4e4a30573e097fe9d42db58b7eefe8fcc9da4608b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv/dashboard-metrics-scraper" id=e7f5eafe-3b9d-413b-ad27-ba57d72a0f50 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.81200158Z" level=info msg="Starting container: dbb59932f2180fbc26c3cde4e4a30573e097fe9d42db58b7eefe8fcc9da4608b" id=17e6bea5-595c-4307-9839-fe411367c43f name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.814487445Z" level=info msg="Started container" PID=1766 containerID=dbb59932f2180fbc26c3cde4e4a30573e097fe9d42db58b7eefe8fcc9da4608b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv/dashboard-metrics-scraper id=17e6bea5-595c-4307-9839-fe411367c43f name=/runtime.v1.RuntimeService/StartContainer sandboxID=449cae5dc8c3085577435cc7e27853826ce51869b1aaa874af05c8b924289951
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.955482779Z" level=info msg="Removing container: 3dd3b88126e06b79575c6a46574b53ce80605578969d25ec0da2a67da4a0adb1" id=748e23d6-2438-4e4e-a3e9-0922473bba15 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.965983752Z" level=info msg="Removed container 3dd3b88126e06b79575c6a46574b53ce80605578969d25ec0da2a67da4a0adb1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv/dashboard-metrics-scraper" id=748e23d6-2438-4e4e-a3e9-0922473bba15 name=/runtime.v1.RuntimeService/RemoveContainer
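
This create/start/remove churn belongs to dashboard-metrics-scraper, which the container-status table below shows as Exited with ATTEMPT 3, i.e. a restart loop. The usual next step is to pull that container's own logs through the CRI; a sketch (assumes the newest matching container is listed first):

	# Grab the most recent dashboard-metrics-scraper container and read its logs.
	CID=$(sudo crictl ps -a --name dashboard-metrics-scraper -q | head -n1)
	sudo crictl logs "$CID"
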
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	dbb59932f2180       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago       Exited              dashboard-metrics-scraper   3                   449cae5dc8c30       dashboard-metrics-scraper-6ffb444bf9-48chv   kubernetes-dashboard
	793d0e41cb7ae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   8d7fb10b3e5de       storage-provisioner                          kube-system
	53e86f711eb6d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago       Running             kubernetes-dashboard        0                   c88cebb0849c4       kubernetes-dashboard-855c9754f9-f6cpg        kubernetes-dashboard
	074c24fe6a917       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   96054c8fed7e7       busybox                                      default
	ddf22487acac1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   8d7fb10b3e5de       storage-provisioner                          kube-system
	f6b933b2408d0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           57 seconds ago       Running             coredns                     0                   32a78cad6c375       coredns-66bc5c9577-hrcxc                     kube-system
	106735404cace       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           57 seconds ago       Running             kube-proxy                  0                   d8d1b3e97327b       kube-proxy-bq985                             kube-system
	e0b6d3ae90667       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   577c1e54cf642       kindnet-kpr69                                kube-system
	159136e63b21e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   ae6a41b8b8f9d       kube-controller-manager-embed-certs-472518   kube-system
	3622c66fa378c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   99f06788b944e       kube-apiserver-embed-certs-472518            kube-system
	a5c1be1847d40       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   3e0cf1a2f5771       kube-scheduler-embed-certs-472518            kube-system
	a52804abc0e71       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   4c9c201463143       etcd-embed-certs-472518                      kube-system
	
	
	==> coredns [f6b933b2408d071de401c79bb1ddb49b0541cd7149531813e62215ccc3e7bf16] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44185 - 55502 "HINFO IN 949683366020793061.52074708829021787. udp 54 false 512" NXDOMAIN qr,rd,ra 129 0.183613778s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
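
The i/o timeouts show coredns failing to reach the kubernetes Service VIP (10.96.0.1:443) while the apiserver was still coming up; the "Still waiting" ready-plugin lines are the same condition seen from the readiness side. That path can be probed from a throwaway pod; a sketch using busybox's netcat:

	# TCP probe of the in-cluster apiserver VIP from a pod network namespace.
	kubectl run svc-probe --rm -it --restart=Never --image=busybox:1.36 \
	  -- nc -zv -w 3 10.96.0.1 443
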
	
	
	==> describe nodes <==
	Name:               embed-certs-472518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-472518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=embed-certs-472518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_19_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:19:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-472518
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:21:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:21:22 +0000   Fri, 10 Oct 2025 18:19:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:21:22 +0000   Fri, 10 Oct 2025 18:19:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:21:22 +0000   Fri, 10 Oct 2025 18:19:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 18:21:22 +0000   Fri, 10 Oct 2025 18:20:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-472518
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                48a864d3-5370-4000-a149-d46b202f0181
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-hrcxc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-embed-certs-472518                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-kpr69                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-embed-certs-472518             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-embed-certs-472518    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-bq985                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-embed-certs-472518             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-48chv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-f6cpg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 111s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node embed-certs-472518 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node embed-certs-472518 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node embed-certs-472518 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s               node-controller  Node embed-certs-472518 event: Registered Node embed-certs-472518 in Controller
	  Normal  NodeReady                101s               kubelet          Node embed-certs-472518 status is now: NodeReady
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node embed-certs-472518 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node embed-certs-472518 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node embed-certs-472518 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                node-controller  Node embed-certs-472518 event: Registered Node embed-certs-472518 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 95 0c 3e 92 2e 08 06
	[  +0.052845] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[ +11.354316] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[  +7.101927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 9b 73 27 8c 80 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[  +6.287191] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 27 2d 28 d6 46 08 06
	[  +0.000293] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[Oct10 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 8c 22 f6 6b cf 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 29 bf 13 20 f9 08 06
	[ +15.511156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e d6 74 aa 27 d0 08 06
	[  +0.008495] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	[Oct10 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 0b 54 33 52 4e 08 06
	[  +0.000597] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	
	
	==> etcd [a52804abc0e7184b8ec037e1a9594b3794f50868b2f90978e95ba4f3dac34818] <==
	{"level":"warn","ts":"2025-10-10T18:20:50.611758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.620814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.631331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.643420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.662565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.672803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.682178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.692147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.701841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.710580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.722741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.733554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.742502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.751223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.769159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.785765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.794736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.802524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.818864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.828227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.836886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.850678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.859561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.867816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.954676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50880","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:21:50 up  1:04,  0 user,  load average: 5.81, 4.72, 2.99
	Linux embed-certs-472518 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e0b6d3ae90667b41d0180616bdfecaebc14631771bd3e49defdf3d111b564ad9] <==
	I1010 18:20:52.483666       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:20:52.485813       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1010 18:20:52.486012       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:20:52.486026       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:20:52.486060       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:20:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:20:52.780915       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:20:52.780981       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:20:52.781001       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:20:52.785740       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1010 18:21:22.782354       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1010 18:21:22.782357       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1010 18:21:22.782352       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1010 18:21:22.782364       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1010 18:21:24.181878       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:21:24.181935       1 metrics.go:72] Registering metrics
	I1010 18:21:24.182536       1 controller.go:711] "Syncing nftables rules"
	I1010 18:21:32.787156       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1010 18:21:32.787227       1 main.go:301] handling current node
	I1010 18:21:42.781210       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1010 18:21:42.781256       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3622c66fa378c4b8614e23f6545ac6151fa6ef096364723cbdd5d22677bc0ca9] <==
	I1010 18:20:51.761580       1 cache.go:39] Caches are synced for autoregister controller
	I1010 18:20:51.761640       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:20:51.740187       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1010 18:20:51.770725       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1010 18:20:51.795842       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1010 18:20:51.807173       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:20:51.837146       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1010 18:20:51.838681       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1010 18:20:51.838694       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1010 18:20:51.842183       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1010 18:20:51.842628       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1010 18:20:51.843503       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1010 18:20:51.843594       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1010 18:20:51.863562       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1010 18:20:51.870354       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1010 18:20:52.504040       1 controller.go:667] quota admission added evaluator for: namespaces
	I1010 18:20:52.558986       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1010 18:20:52.589519       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:20:52.599566       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:20:52.641354       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:20:52.697450       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.227.155"}
	I1010 18:20:52.714637       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.93.64"}
	I1010 18:20:55.405382       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1010 18:20:55.506295       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1010 18:20:55.655439       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [159136e63b21ef09e85b6efdc6b5a0f5be67f5af9a3516c5f8cae7be0af60846] <==
	I1010 18:20:55.052745       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 18:20:55.052864       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1010 18:20:55.052872       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1010 18:20:55.055903       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1010 18:20:55.057719       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1010 18:20:55.060070       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:20:55.063197       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1010 18:20:55.063197       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:20:55.063340       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1010 18:20:55.063427       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-472518"
	I1010 18:20:55.063480       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1010 18:20:55.064319       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1010 18:20:55.066674       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1010 18:20:55.067495       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1010 18:20:55.067755       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1010 18:20:55.067881       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1010 18:20:55.079186       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 18:20:55.084321       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1010 18:20:55.084420       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1010 18:20:55.084478       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1010 18:20:55.084536       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1010 18:20:55.084550       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1010 18:20:55.086701       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1010 18:20:55.089000       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1010 18:20:55.091355       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [106735404cace4f8939a0b5039e3d3506588ed35258591de2b5d9b775beb2175] <==
	I1010 18:20:52.540633       1 server_linux.go:53] "Using iptables proxy"
	I1010 18:20:52.606915       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 18:20:52.713692       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 18:20:52.713745       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1010 18:20:52.713844       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:20:52.745153       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:20:52.745217       1 server_linux.go:132] "Using iptables Proxier"
	I1010 18:20:52.761719       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:20:52.763696       1 server.go:527] "Version info" version="v1.34.1"
	I1010 18:20:52.763847       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:20:52.768425       1 config.go:200] "Starting service config controller"
	I1010 18:20:52.768487       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 18:20:52.768900       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 18:20:52.769115       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 18:20:52.769184       1 config.go:106] "Starting endpoint slice config controller"
	I1010 18:20:52.769202       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 18:20:52.769237       1 config.go:309] "Starting node config controller"
	I1010 18:20:52.769249       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 18:20:52.769257       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 18:20:52.871913       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1010 18:20:52.872073       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 18:20:52.872118       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a5c1be1847d40640048f86d96a7f93b4166d1688a8afd40971231c2b59f73202] <==
	I1010 18:20:49.927660       1 serving.go:386] Generated self-signed cert in-memory
	W1010 18:20:51.685824       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1010 18:20:51.685856       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1010 18:20:51.685867       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1010 18:20:51.685877       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1010 18:20:51.781278       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1010 18:20:51.781309       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:20:51.785636       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1010 18:20:51.785752       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:20:51.785765       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:20:51.785796       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1010 18:20:51.886842       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 10 18:20:55 embed-certs-472518 kubelet[732]: I1010 18:20:55.689096     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/da1e80de-882a-4c82-a6f9-ab96c978cfec-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-48chv\" (UID: \"da1e80de-882a-4c82-a6f9-ab96c978cfec\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv"
	Oct 10 18:20:58 embed-certs-472518 kubelet[732]: I1010 18:20:58.853078     732 scope.go:117] "RemoveContainer" containerID="b0e574a1e8a9408f590c0e99b094b928f7c5676d97d98f572b4174ed603efc41"
	Oct 10 18:20:59 embed-certs-472518 kubelet[732]: I1010 18:20:59.859086     732 scope.go:117] "RemoveContainer" containerID="b0e574a1e8a9408f590c0e99b094b928f7c5676d97d98f572b4174ed603efc41"
	Oct 10 18:20:59 embed-certs-472518 kubelet[732]: I1010 18:20:59.859503     732 scope.go:117] "RemoveContainer" containerID="afb956e3828986ee14f1408cce96c5b3f289a23883e1878d2410dd28c1fa8639"
	Oct 10 18:20:59 embed-certs-472518 kubelet[732]: E1010 18:20:59.859708     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-48chv_kubernetes-dashboard(da1e80de-882a-4c82-a6f9-ab96c978cfec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv" podUID="da1e80de-882a-4c82-a6f9-ab96c978cfec"
	Oct 10 18:21:00 embed-certs-472518 kubelet[732]: I1010 18:21:00.864725     732 scope.go:117] "RemoveContainer" containerID="afb956e3828986ee14f1408cce96c5b3f289a23883e1878d2410dd28c1fa8639"
	Oct 10 18:21:00 embed-certs-472518 kubelet[732]: E1010 18:21:00.864945     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-48chv_kubernetes-dashboard(da1e80de-882a-4c82-a6f9-ab96c978cfec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv" podUID="da1e80de-882a-4c82-a6f9-ab96c978cfec"
	Oct 10 18:21:03 embed-certs-472518 kubelet[732]: I1010 18:21:03.885734     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6cpg" podStartSLOduration=1.70621527 podStartE2EDuration="8.885714272s" podCreationTimestamp="2025-10-10 18:20:55 +0000 UTC" firstStartedPulling="2025-10-10 18:20:55.976938383 +0000 UTC m=+7.302703098" lastFinishedPulling="2025-10-10 18:21:03.156437368 +0000 UTC m=+14.482202100" observedRunningTime="2025-10-10 18:21:03.885414976 +0000 UTC m=+15.211179713" watchObservedRunningTime="2025-10-10 18:21:03.885714272 +0000 UTC m=+15.211479008"
	Oct 10 18:21:09 embed-certs-472518 kubelet[732]: I1010 18:21:09.699128     732 scope.go:117] "RemoveContainer" containerID="afb956e3828986ee14f1408cce96c5b3f289a23883e1878d2410dd28c1fa8639"
	Oct 10 18:21:09 embed-certs-472518 kubelet[732]: I1010 18:21:09.891203     732 scope.go:117] "RemoveContainer" containerID="afb956e3828986ee14f1408cce96c5b3f289a23883e1878d2410dd28c1fa8639"
	Oct 10 18:21:09 embed-certs-472518 kubelet[732]: I1010 18:21:09.891464     732 scope.go:117] "RemoveContainer" containerID="3dd3b88126e06b79575c6a46574b53ce80605578969d25ec0da2a67da4a0adb1"
	Oct 10 18:21:09 embed-certs-472518 kubelet[732]: E1010 18:21:09.891686     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-48chv_kubernetes-dashboard(da1e80de-882a-4c82-a6f9-ab96c978cfec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv" podUID="da1e80de-882a-4c82-a6f9-ab96c978cfec"
	Oct 10 18:21:19 embed-certs-472518 kubelet[732]: I1010 18:21:19.699334     732 scope.go:117] "RemoveContainer" containerID="3dd3b88126e06b79575c6a46574b53ce80605578969d25ec0da2a67da4a0adb1"
	Oct 10 18:21:19 embed-certs-472518 kubelet[732]: E1010 18:21:19.699588     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-48chv_kubernetes-dashboard(da1e80de-882a-4c82-a6f9-ab96c978cfec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv" podUID="da1e80de-882a-4c82-a6f9-ab96c978cfec"
	Oct 10 18:21:22 embed-certs-472518 kubelet[732]: I1010 18:21:22.923601     732 scope.go:117] "RemoveContainer" containerID="ddf22487acac1f44767d6faad43efdcb55e126e4d543b64497d3614254c5e0d5"
	Oct 10 18:21:32 embed-certs-472518 kubelet[732]: I1010 18:21:32.775020     732 scope.go:117] "RemoveContainer" containerID="3dd3b88126e06b79575c6a46574b53ce80605578969d25ec0da2a67da4a0adb1"
	Oct 10 18:21:32 embed-certs-472518 kubelet[732]: I1010 18:21:32.954067     732 scope.go:117] "RemoveContainer" containerID="3dd3b88126e06b79575c6a46574b53ce80605578969d25ec0da2a67da4a0adb1"
	Oct 10 18:21:32 embed-certs-472518 kubelet[732]: I1010 18:21:32.954318     732 scope.go:117] "RemoveContainer" containerID="dbb59932f2180fbc26c3cde4e4a30573e097fe9d42db58b7eefe8fcc9da4608b"
	Oct 10 18:21:32 embed-certs-472518 kubelet[732]: E1010 18:21:32.954541     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-48chv_kubernetes-dashboard(da1e80de-882a-4c82-a6f9-ab96c978cfec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv" podUID="da1e80de-882a-4c82-a6f9-ab96c978cfec"
	Oct 10 18:21:39 embed-certs-472518 kubelet[732]: I1010 18:21:39.699019     732 scope.go:117] "RemoveContainer" containerID="dbb59932f2180fbc26c3cde4e4a30573e097fe9d42db58b7eefe8fcc9da4608b"
	Oct 10 18:21:39 embed-certs-472518 kubelet[732]: E1010 18:21:39.699244     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-48chv_kubernetes-dashboard(da1e80de-882a-4c82-a6f9-ab96c978cfec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv" podUID="da1e80de-882a-4c82-a6f9-ab96c978cfec"
	Oct 10 18:21:47 embed-certs-472518 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 10 18:21:47 embed-certs-472518 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 10 18:21:47 embed-certs-472518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 10 18:21:47 embed-certs-472518 systemd[1]: kubelet.service: Consumed 1.839s CPU time.
	
	
	==> kubernetes-dashboard [53e86f711eb6d6e029bf1dc5a1c14477be282ed5a7268cc1290a1a04c4d06252] <==
	2025/10/10 18:21:03 Starting overwatch
	2025/10/10 18:21:03 Using namespace: kubernetes-dashboard
	2025/10/10 18:21:03 Using in-cluster config to connect to apiserver
	2025/10/10 18:21:03 Using secret token for csrf signing
	2025/10/10 18:21:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/10 18:21:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/10 18:21:03 Successful initial request to the apiserver, version: v1.34.1
	2025/10/10 18:21:03 Generating JWE encryption key
	2025/10/10 18:21:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/10 18:21:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/10 18:21:03 Initializing JWE encryption key from synchronized object
	2025/10/10 18:21:03 Creating in-cluster Sidecar client
	2025/10/10 18:21:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/10 18:21:03 Serving insecurely on HTTP port: 9090
	2025/10/10 18:21:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [793d0e41cb7aec4a0f299624e039a34166a5a6807a3d1eedf9e3849fcb6c50de] <==
	I1010 18:21:22.982155       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 18:21:22.982199       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1010 18:21:22.984441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:26.440354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:30.700720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:34.299127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:37.353568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:40.376248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:40.380564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:21:40.380691       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 18:21:40.380822       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a789c1a6-8b74-43de-be1d-69d02ac1d0c8", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-472518_b73d5408-e577-4024-974e-82622bdca229 became leader
	I1010 18:21:40.380840       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-472518_b73d5408-e577-4024-974e-82622bdca229!
	W1010 18:21:40.404794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:40.409720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:21:40.481038       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-472518_b73d5408-e577-4024-974e-82622bdca229!
	W1010 18:21:42.412928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:42.416985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:44.421261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:44.426977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:46.430224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:46.434377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:48.437467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:48.441640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:50.445952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:50.451478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ddf22487acac1f44767d6faad43efdcb55e126e4d543b64497d3614254c5e0d5] <==
	I1010 18:20:52.470804       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1010 18:21:22.507231       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-472518 -n embed-certs-472518
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-472518 -n embed-certs-472518: exit status 2 (410.049841ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-472518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-472518
helpers_test.go:243: (dbg) docker inspect embed-certs-472518:

-- stdout --
	[
	    {
	        "Id": "2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e",
	        "Created": "2025-10-10T18:19:36.31646399Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 315445,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:20:42.102857084Z",
	            "FinishedAt": "2025-10-10T18:20:41.266629383Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e/hosts",
	        "LogPath": "/var/lib/docker/containers/2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e/2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e-json.log",
	        "Name": "/embed-certs-472518",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-472518:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-472518",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2e7bf16e9ebb73fcfd92fb1e6d8f20354619815c38b49f886ca33e7e71b2139e",
	                "LowerDir": "/var/lib/docker/overlay2/5fa96fd4ec73d503d3d3528d8c7b13f7ca1b0a64ecf18291fa642aa2e0a2033a-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5fa96fd4ec73d503d3d3528d8c7b13f7ca1b0a64ecf18291fa642aa2e0a2033a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5fa96fd4ec73d503d3d3528d8c7b13f7ca1b0a64ecf18291fa642aa2e0a2033a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5fa96fd4ec73d503d3d3528d8c7b13f7ca1b0a64ecf18291fa642aa2e0a2033a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-472518",
	                "Source": "/var/lib/docker/volumes/embed-certs-472518/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-472518",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-472518",
	                "name.minikube.sigs.k8s.io": "embed-certs-472518",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1eaba29d650e742cb0aa0d02a484531c40045eacff0ab67a86619c74f99ba3af",
	            "SandboxKey": "/var/run/docker/netns/1eaba29d650e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-472518": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:0b:88:2b:88:2c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cbce2d732620a5010a9bb6fa38f48aa0b3fba945ed0c5927e2d54406158c8a77",
	                    "EndpointID": "49bf35f7d183b3ea09fba66178faaeb753d4bd17df51c224dd76e667fe1ba4f4",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-472518",
	                        "2e7bf16e9ebb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-472518 -n embed-certs-472518
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-472518 -n embed-certs-472518: exit status 2 (380.589164ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-472518 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-472518 logs -n 25: (1.469883817s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-141193 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p old-k8s-version-141193 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ stop    │ -p embed-certs-472518 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable metrics-server -p no-preload-556024 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │                     │
	│ delete  │ -p disable-driver-mounts-523797                                                                                                                                                                                                               │ disable-driver-mounts-523797 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ stop    │ -p no-preload-556024 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-472518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p embed-certs-472518 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable dashboard -p no-preload-556024 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p no-preload-556024 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-821769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-821769 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ image   │ old-k8s-version-141193 image list --format=json                                                                                                                                                                                               │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p old-k8s-version-141193 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-821769 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ no-preload-556024 image list --format=json                                                                                                                                                                                                    │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p no-preload-556024 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ embed-certs-472518 image list --format=json                                                                                                                                                                                                   │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p embed-certs-472518 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ delete  │ -p no-preload-556024                                                                                                                                                                                                                          │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:21:36
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:21:36.443972  325699 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:21:36.444232  325699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:36.444242  325699 out.go:374] Setting ErrFile to fd 2...
	I1010 18:21:36.444246  325699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:36.444423  325699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:21:36.444868  325699 out.go:368] Setting JSON to false
	I1010 18:21:36.445989  325699 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3836,"bootTime":1760116660,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:21:36.446111  325699 start.go:141] virtualization: kvm guest
	I1010 18:21:36.447655  325699 out.go:179] * [default-k8s-diff-port-821769] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:21:36.451745  325699 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:21:36.451794  325699 notify.go:220] Checking for updates...
	I1010 18:21:36.453782  325699 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:21:36.454903  325699 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:36.456168  325699 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:21:36.457303  325699 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:21:36.458541  325699 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:21:36.460107  325699 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:36.460644  325699 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:21:36.487553  325699 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:21:36.487706  325699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:21:36.548644  325699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:82 SystemTime:2025-10-10 18:21:36.539560881 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:21:36.548787  325699 docker.go:318] overlay module found
	I1010 18:21:36.550878  325699 out.go:179] * Using the docker driver based on existing profile
	I1010 18:21:31.750233  324649 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1010 18:21:31.750529  324649 start.go:159] libmachine.API.Create for "newest-cni-121129" (driver="docker")
	I1010 18:21:31.750565  324649 client.go:168] LocalClient.Create starting
	I1010 18:21:31.750670  324649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem
	I1010 18:21:31.750723  324649 main.go:141] libmachine: Decoding PEM data...
	I1010 18:21:31.750746  324649 main.go:141] libmachine: Parsing certificate...
	I1010 18:21:31.750822  324649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem
	I1010 18:21:31.750849  324649 main.go:141] libmachine: Decoding PEM data...
	I1010 18:21:31.750864  324649 main.go:141] libmachine: Parsing certificate...
	I1010 18:21:31.751250  324649 cli_runner.go:164] Run: docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1010 18:21:31.769180  324649 cli_runner.go:211] docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1010 18:21:31.769299  324649 network_create.go:284] running [docker network inspect newest-cni-121129] to gather additional debugging logs...
	I1010 18:21:31.769325  324649 cli_runner.go:164] Run: docker network inspect newest-cni-121129
	W1010 18:21:31.785789  324649 cli_runner.go:211] docker network inspect newest-cni-121129 returned with exit code 1
	I1010 18:21:31.785839  324649 network_create.go:287] error running [docker network inspect newest-cni-121129]: docker network inspect newest-cni-121129: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-121129 not found
	I1010 18:21:31.785860  324649 network_create.go:289] output of [docker network inspect newest-cni-121129]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-121129 not found
	
	** /stderr **
	I1010 18:21:31.785985  324649 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:21:31.803517  324649 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f8fb0c8a54c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:51:a2:ab:ca:d6} reservation:<nil>}
	I1010 18:21:31.804204  324649 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bdbbffbd65c1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:11:33:77:48:20} reservation:<nil>}
	I1010 18:21:31.804907  324649 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b6a5dab2001 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:93:a5:d3:c3:8f} reservation:<nil>}
	I1010 18:21:31.805493  324649 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-62177a68d9eb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:70:f2:a2:da:00} reservation:<nil>}
	I1010 18:21:31.806333  324649 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f75590}
	I1010 18:21:31.806360  324649 network_create.go:124] attempt to create docker network newest-cni-121129 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1010 18:21:31.806398  324649 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-121129 newest-cni-121129
	I1010 18:21:31.865994  324649 network_create.go:108] docker network newest-cni-121129 192.168.85.0/24 created
	I1010 18:21:31.866029  324649 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-121129" container
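	The subnet scan above walks the bridges Docker already owns and settles on the first free 192.168.x.0/24 before creating a dedicated network and deriving the node's static IP from it. A hedged sketch of the same check done by hand (the enumeration loop is illustrative; the create flags are copied from the log line above):
	
	# List subnets already claimed by Docker networks, then create the bridge
	# on a /24 that does not appear in the output.
	for net in $(docker network ls -q); do
	  docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' "$net"
	done | sort -u
	docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 newest-cni-121129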
	I1010 18:21:31.866140  324649 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1010 18:21:31.883599  324649 cli_runner.go:164] Run: docker volume create newest-cni-121129 --label name.minikube.sigs.k8s.io=newest-cni-121129 --label created_by.minikube.sigs.k8s.io=true
	I1010 18:21:31.901755  324649 oci.go:103] Successfully created a docker volume newest-cni-121129
	I1010 18:21:31.901834  324649 cli_runner.go:164] Run: docker run --rm --name newest-cni-121129-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-121129 --entrypoint /usr/bin/test -v newest-cni-121129:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -d /var/lib
	I1010 18:21:32.316917  324649 oci.go:107] Successfully prepared a docker volume newest-cni-121129
	I1010 18:21:32.316960  324649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:32.316979  324649 kic.go:194] Starting extracting preloaded images to volume ...
	I1010 18:21:32.317041  324649 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-121129:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1010 18:21:36.215225  324649 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-121129:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.898129423s)
	I1010 18:21:36.215274  324649 kic.go:203] duration metric: took 3.898290657s to extract preloaded images to volume ...
	W1010 18:21:36.215394  324649 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1010 18:21:36.215437  324649 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1010 18:21:36.215483  324649 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1010 18:21:36.276319  324649 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-121129 --name newest-cni-121129 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-121129 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-121129 --network newest-cni-121129 --ip 192.168.85.2 --volume newest-cni-121129:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6
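	The single-line docker run above is the node-container launch. Re-wrapped below as a bash array purely for readability (flags copied from that log line; the minikube labels are omitted and the digest-pinned kicbase image is shortened to a variable):
	
	# Same invocation as above, spread out; KICBASE stands in for the
	# digest-pinned gcr.io/k8s-minikube/kicbase-builds image from the log.
	KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724
	args=(
	  -d -t --privileged --security-opt seccomp=unconfined
	  --tmpfs /tmp --tmpfs /run                      # writable scratch dirs
	  -v /lib/modules:/lib/modules:ro                # host kernel modules, read-only
	  --hostname newest-cni-121129 --name newest-cni-121129
	  --network newest-cni-121129 --ip 192.168.85.2  # static IP on the bridge created earlier
	  --volume newest-cni-121129:/var                # volume holding the extracted preload
	  --security-opt apparmor=unconfined --memory=3072mb
	  -e container=docker
	  --expose 8443 --publish=127.0.0.1::8443        # apiserver
	  --publish=127.0.0.1::22 --publish=127.0.0.1::2376
	  --publish=127.0.0.1::5000 --publish=127.0.0.1::32443
	)
	docker run "${args[@]}" "$KICBASE"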
	I1010 18:21:36.552156  325699 start.go:305] selected driver: docker
	I1010 18:21:36.552182  325699 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:36.552263  325699 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:21:36.552888  325699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:21:36.619123  325699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-10 18:21:36.608354336 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:21:36.619511  325699 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:21:36.619549  325699 cni.go:84] Creating CNI manager for ""
	I1010 18:21:36.619602  325699 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:36.619655  325699 start.go:349] cluster config:
	{Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:36.621174  325699 out.go:179] * Starting "default-k8s-diff-port-821769" primary control-plane node in "default-k8s-diff-port-821769" cluster
	I1010 18:21:36.623163  325699 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:21:36.624439  325699 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:21:36.625488  325699 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:36.625524  325699 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:21:36.625536  325699 cache.go:58] Caching tarball of preloaded images
	I1010 18:21:36.625602  325699 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:21:36.625620  325699 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:21:36.625631  325699 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1010 18:21:36.625748  325699 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/config.json ...
	I1010 18:21:36.646734  325699 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:21:36.646759  325699 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:21:36.646779  325699 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:21:36.646809  325699 start.go:360] acquireMachinesLock for default-k8s-diff-port-821769: {Name:mk32364aa6b9096e7aa0195f0d450a3e04b4f6f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:21:36.646879  325699 start.go:364] duration metric: took 45.359µs to acquireMachinesLock for "default-k8s-diff-port-821769"
	I1010 18:21:36.646912  325699 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:21:36.646922  325699 fix.go:54] fixHost starting: 
	I1010 18:21:36.647229  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:36.665115  325699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-821769: state=Stopped err=<nil>
	W1010 18:21:36.665142  325699 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 18:21:36.566005  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Running}}
	I1010 18:21:36.587637  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:36.609439  324649 cli_runner.go:164] Run: docker exec newest-cni-121129 stat /var/lib/dpkg/alternatives/iptables
	I1010 18:21:36.654885  324649 oci.go:144] the created container "newest-cni-121129" has a running status.
	I1010 18:21:36.654911  324649 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa...
	I1010 18:21:37.150404  324649 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1010 18:21:37.181411  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:37.202450  324649 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1010 18:21:37.202483  324649 kic_runner.go:114] Args: [docker exec --privileged newest-cni-121129 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1010 18:21:37.249728  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:37.274026  324649 machine.go:93] provisionDockerMachine start ...
	I1010 18:21:37.274139  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:37.295767  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.296119  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:37.296140  324649 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:21:37.433206  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:21:37.433232  324649 ubuntu.go:182] provisioning hostname "newest-cni-121129"
	I1010 18:21:37.433293  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:37.451228  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.451497  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:37.451516  324649 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-121129 && echo "newest-cni-121129" | sudo tee /etc/hostname
	I1010 18:21:37.593295  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:21:37.593411  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:37.611384  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.611592  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:37.611611  324649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-121129' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-121129/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-121129' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:21:37.744646  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:21:37.744678  324649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:21:37.744702  324649 ubuntu.go:190] setting up certificates
	I1010 18:21:37.744714  324649 provision.go:84] configureAuth start
	I1010 18:21:37.744775  324649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:21:37.762585  324649 provision.go:143] copyHostCerts
	I1010 18:21:37.762636  324649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:21:37.762644  324649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:21:37.762711  324649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:21:37.762804  324649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:21:37.762812  324649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:21:37.762837  324649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:21:37.762889  324649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:21:37.762896  324649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:21:37.762918  324649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:21:37.762968  324649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.newest-cni-121129 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-121129]
	I1010 18:21:38.017732  324649 provision.go:177] copyRemoteCerts
	I1010 18:21:38.017792  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:21:38.017828  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.035754  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.135582  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:21:38.158372  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 18:21:38.177887  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:21:38.197335  324649 provision.go:87] duration metric: took 452.609625ms to configureAuth
	I1010 18:21:38.197361  324649 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:21:38.197520  324649 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:38.197616  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.215693  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:38.215929  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:38.215945  324649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:21:38.487590  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:21:38.487615  324649 machine.go:96] duration metric: took 1.213566349s to provisionDockerMachine
	I1010 18:21:38.487627  324649 client.go:171] duration metric: took 6.737054602s to LocalClient.Create
	I1010 18:21:38.487644  324649 start.go:167] duration metric: took 6.737116946s to libmachine.API.Create "newest-cni-121129"
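	The sysconfig write a few lines up is how minikube hands CRI-O its runtime flags: the drop-in exports CRIO_MINIKUBE_OPTIONS so the in-cluster service CIDR is treated as an insecure registry, presumably sourced via an EnvironmentFile in the crio unit. A hedged way to verify the drop-in by hand (commands illustrative, not from the log):
	
	cat /etc/sysconfig/crio.minikube
	# expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	systemctl show crio -p EnvironmentFiles   # confirm the unit sources the file (property name per systemd)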
	I1010 18:21:38.487653  324649 start.go:293] postStartSetup for "newest-cni-121129" (driver="docker")
	I1010 18:21:38.487667  324649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:21:38.487718  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:21:38.487755  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.505301  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.604755  324649 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:21:38.608251  324649 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:21:38.608275  324649 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:21:38.608284  324649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:21:38.608338  324649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:21:38.608407  324649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:21:38.608505  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:21:38.617071  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:38.639238  324649 start.go:296] duration metric: took 151.569017ms for postStartSetup
	I1010 18:21:38.639632  324649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:21:38.658650  324649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/config.json ...
	I1010 18:21:38.658910  324649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:21:38.658972  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.676393  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.770086  324649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:21:38.774771  324649 start.go:128] duration metric: took 7.026418609s to createHost
	I1010 18:21:38.774799  324649 start.go:83] releasing machines lock for "newest-cni-121129", held for 7.026572954s
	I1010 18:21:38.774867  324649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:21:38.794249  324649 ssh_runner.go:195] Run: cat /version.json
	I1010 18:21:38.794292  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.794343  324649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:21:38.794395  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.812781  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.813044  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.964620  324649 ssh_runner.go:195] Run: systemctl --version
	I1010 18:21:38.971493  324649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:21:39.008047  324649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:21:39.012702  324649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:21:39.012768  324649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:21:39.043167  324649 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
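	The find/mv pair above shelves any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, leaving pod networking to the CNI minikube manages. The same operation with shell quoting restored (the log prints arguments post-expansion; this restatement is a sketch):
	
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;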
	I1010 18:21:39.043195  324649 start.go:495] detecting cgroup driver to use...
	I1010 18:21:39.043236  324649 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:21:39.043275  324649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:21:39.060424  324649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:21:39.073422  324649 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:21:39.073477  324649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:21:39.090113  324649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:21:39.108184  324649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:21:39.193075  324649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:21:39.284238  324649 docker.go:234] disabling docker service ...
	I1010 18:21:39.284295  324649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:21:39.303174  324649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:21:39.316224  324649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:21:39.401593  324649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:21:39.486478  324649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:21:39.499671  324649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:21:39.515336  324649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:21:39.515393  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.526705  324649 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:21:39.526768  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.536968  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.546772  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.556927  324649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:21:39.566265  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.576240  324649 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.591514  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.601231  324649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:21:39.609546  324649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:21:39.617339  324649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:39.697520  324649 ssh_runner.go:195] Run: sudo systemctl restart crio
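	The steps from 18:21:39.515 through the restart above are the CRI-O reconfiguration, applied one sed at a time. Collected into a single hedged sketch (path, keys, and values taken from the commands in the log; the /etc/cni/net.mk removal and the duplicate-sysctl cleanup sed are omitted for brevity):
	
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"   # pin the pause image
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"                  # match the host cgroup driver
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo grep -q '^ *default_sysctls' "$CONF" || \
	  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"  # low ports in pods
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio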
	I1010 18:21:39.833447  324649 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:21:39.833510  324649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:21:39.837650  324649 start.go:563] Will wait 60s for crictl version
	I1010 18:21:39.837706  324649 ssh_runner.go:195] Run: which crictl
	I1010 18:21:39.841778  324649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:21:39.866403  324649 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:21:39.866489  324649 ssh_runner.go:195] Run: crio --version
	I1010 18:21:39.894594  324649 ssh_runner.go:195] Run: crio --version
	I1010 18:21:39.923363  324649 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:21:39.924491  324649 cli_runner.go:164] Run: docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:21:39.942921  324649 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1010 18:21:39.947042  324649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:21:39.959308  324649 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1010 18:21:36.669200  325699 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-821769" ...
	I1010 18:21:36.669266  325699 cli_runner.go:164] Run: docker start default-k8s-diff-port-821769
	I1010 18:21:36.950209  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:36.973712  325699 kic.go:430] container "default-k8s-diff-port-821769" state is running.
	I1010 18:21:36.974205  325699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-821769
	I1010 18:21:36.999384  325699 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/config.json ...
	I1010 18:21:36.999678  325699 machine.go:93] provisionDockerMachine start ...
	I1010 18:21:36.999832  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:37.025140  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.025476  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:37.025494  325699 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:21:37.026335  325699 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37242->127.0.0.1:33128: read: connection reset by peer
	I1010 18:21:40.162873  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-821769
	
	I1010 18:21:40.162901  325699 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-821769"
	I1010 18:21:40.162999  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.189150  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:40.189443  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:40.189466  325699 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-821769 && echo "default-k8s-diff-port-821769" | sudo tee /etc/hostname
	I1010 18:21:40.331478  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-821769
	
	I1010 18:21:40.331570  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.349460  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:40.349752  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:40.349789  325699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-821769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-821769/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-821769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:21:40.495960  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:21:40.495988  325699 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:21:40.496005  325699 ubuntu.go:190] setting up certificates
	I1010 18:21:40.496013  325699 provision.go:84] configureAuth start
	I1010 18:21:40.496106  325699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-821769
	I1010 18:21:40.515849  325699 provision.go:143] copyHostCerts
	I1010 18:21:40.515918  325699 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:21:40.515937  325699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:21:40.516030  325699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:21:40.516170  325699 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:21:40.516190  325699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:21:40.516240  325699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:21:40.516317  325699 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:21:40.516328  325699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:21:40.516365  325699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:21:40.516437  325699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-821769 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-821769 localhost minikube]
	I1010 18:21:40.621000  325699 provision.go:177] copyRemoteCerts
	I1010 18:21:40.621136  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:21:40.621199  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.639539  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:40.738484  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:21:40.758076  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1010 18:21:40.777450  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:21:40.796411  325699 provision.go:87] duration metric: took 300.38696ms to configureAuth
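	[note] configureAuth above (18:21:40.496013 through 18:21:40.796411) re-copies the host certs and signs a server certificate whose san=[...] list covers the container IP, the profile name, localhost and minikube. A minimal Go sketch of that signing step with crypto/x509; the package name and helper are illustrative, not minikube's actual code:

	    package provision

	    import (
	    	"crypto/rand"
	    	"crypto/rsa"
	    	"crypto/x509"
	    	"crypto/x509/pkix"
	    	"math/big"
	    	"net"
	    	"time"
	    )

	    // newServerCert signs a server cert for the given SANs with the CA key pair.
	    // IP-shaped entries (127.0.0.1, 192.168.103.2) go into IPAddresses; the rest
	    // (default-k8s-diff-port-821769, localhost, minikube) into DNSNames.
	    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
	    	key, err := rsa.GenerateKey(rand.Reader, 2048)
	    	if err != nil {
	    		return nil, nil, err
	    	}
	    	tmpl := &x509.Certificate{
	    		SerialNumber: big.NewInt(time.Now().UnixNano()),
	    		Subject:      pkix.Name{Organization: []string{org}},
	    		NotBefore:    time.Now().Add(-time.Hour),
	    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
	    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	    	}
	    	for _, san := range sans {
	    		if ip := net.ParseIP(san); ip != nil {
	    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
	    		} else {
	    			tmpl.DNSNames = append(tmpl.DNSNames, san)
	    		}
	    	}
	    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	    	return der, key, err
	    }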
	I1010 18:21:40.796439  325699 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:21:40.796606  325699 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:40.796693  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.814633  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:40.814851  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:40.814874  325699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:21:41.126788  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:21:41.126818  325699 machine.go:96] duration metric: took 4.127117296s to provisionDockerMachine
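	[note] The CRIO_MINIKUBE_OPTIONS write above is one shell pipeline executed over the SSH tunnel at 127.0.0.1:33128 as user docker. A rough equivalent with golang.org/x/crypto/ssh; skipping host-key checking is only acceptable because the target is a throwaway local test container:

	    package main

	    import (
	    	"os"

	    	"golang.org/x/crypto/ssh"
	    )

	    func main() {
	    	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa")
	    	if err != nil {
	    		panic(err)
	    	}
	    	signer, err := ssh.ParsePrivateKey(keyPEM)
	    	if err != nil {
	    		panic(err)
	    	}
	    	client, err := ssh.Dial("tcp", "127.0.0.1:33128", &ssh.ClientConfig{
	    		User:            "docker",
	    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
	    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local kicbase container only
	    	})
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer client.Close()
	    	sess, err := client.NewSession()
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer sess.Close()
	    	// Same command the provisioner ran at 18:21:40.814874.
	    	cmd := "sudo mkdir -p /etc/sysconfig && printf %s \"\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
	    	if err := sess.Run(cmd); err != nil {
	    		panic(err)
	    	}
	    }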
	I1010 18:21:41.126831  325699 start.go:293] postStartSetup for "default-k8s-diff-port-821769" (driver="docker")
	I1010 18:21:41.126845  325699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:21:41.126909  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:21:41.126956  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.146094  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:41.244401  325699 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:21:41.247953  325699 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:21:41.247984  325699 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:21:41.247996  325699 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:21:41.248060  325699 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:21:41.248175  325699 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:21:41.248266  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:21:41.256669  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:41.275845  325699 start.go:296] duration metric: took 149.001179ms for postStartSetup
	I1010 18:21:41.275913  325699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:21:41.275950  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.294158  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:41.387292  325699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:21:41.391952  325699 fix.go:56] duration metric: took 4.745025215s for fixHost
	I1010 18:21:41.391980  325699 start.go:83] releasing machines lock for "default-k8s-diff-port-821769", held for 4.745085816s
	I1010 18:21:41.392032  325699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-821769
	I1010 18:21:41.410356  325699 ssh_runner.go:195] Run: cat /version.json
	I1010 18:21:41.410400  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.410462  325699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:21:41.410537  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.428673  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:41.429174  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:39.960290  324649 kubeadm.go:883] updating cluster {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:21:39.960390  324649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:39.960442  324649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:39.991643  324649 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:39.991664  324649 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:21:39.991716  324649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:40.018213  324649 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:40.018233  324649 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:21:40.018240  324649 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1010 18:21:40.018331  324649 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-121129 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:21:40.018427  324649 ssh_runner.go:195] Run: crio config
	I1010 18:21:40.065330  324649 cni.go:84] Creating CNI manager for ""
	I1010 18:21:40.065358  324649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:40.065375  324649 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1010 18:21:40.065395  324649 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-121129 NodeName:newest-cni-121129 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:21:40.065508  324649 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-121129"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
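	[note] kubeadm.yaml.new above carries four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), separated by ---. A small sketch that splits and labels them with gopkg.in/yaml.v3:

	    package main

	    import (
	    	"fmt"
	    	"io"
	    	"os"

	    	"gopkg.in/yaml.v3"
	    )

	    func main() {
	    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer f.Close()
	    	dec := yaml.NewDecoder(f) // handles the "---" document separators
	    	for {
	    		var doc map[string]interface{}
	    		if err := dec.Decode(&doc); err == io.EOF {
	    			break
	    		} else if err != nil {
	    			panic(err)
	    		}
	    		fmt.Println(doc["apiVersion"], doc["kind"])
	    	}
	    }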
	I1010 18:21:40.065561  324649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:21:40.074911  324649 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:21:40.074973  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:21:40.083566  324649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1010 18:21:40.097986  324649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:21:40.114282  324649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1010 18:21:40.128847  324649 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:21:40.132698  324649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
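	[note] The /etc/hosts update above is the usual grep-out-then-append trick, so re-running it never duplicates the entry. The same filter in Go (run against the node's /etc/hosts; writing the result back still needs the sudo cp step the log shows):

	    package main

	    import (
	    	"os"
	    	"strings"
	    )

	    func main() {
	    	const entry = "192.168.85.2\tcontrol-plane.minikube.internal"
	    	data, err := os.ReadFile("/etc/hosts")
	    	if err != nil {
	    		panic(err)
	    	}
	    	var kept []string
	    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
	    		// Drop any stale mapping for the same hostname, whatever its IP.
	    		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
	    			continue
	    		}
	    		kept = append(kept, line)
	    	}
	    	kept = append(kept, entry)
	    	// Stand-in for the /tmp/h.$$ scratch file followed by sudo cp.
	    	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
	    		panic(err)
	    	}
	    }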
	I1010 18:21:40.143413  324649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:40.227094  324649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:40.249628  324649 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129 for IP: 192.168.85.2
	I1010 18:21:40.249652  324649 certs.go:195] generating shared ca certs ...
	I1010 18:21:40.249678  324649 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:40.249833  324649 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:21:40.249870  324649 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:21:40.249880  324649 certs.go:257] generating profile certs ...
	I1010 18:21:40.249964  324649 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key
	I1010 18:21:40.249986  324649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.crt with IP's: []
	I1010 18:21:40.601463  324649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.crt ...
	I1010 18:21:40.601490  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.crt: {Name:mk644ed6d675dd6a538c02d2c8e614b2a15b3122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:40.601663  324649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key ...
	I1010 18:21:40.601672  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key: {Name:mk914b6f6ffa18eaa800e7d301f088828f088f03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:40.601751  324649 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7
	I1010 18:21:40.601767  324649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1010 18:21:41.352224  324649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7 ...
	I1010 18:21:41.352248  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7: {Name:mkdef5060ad4b077648f6c85a78fa3bbbb5e73d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.352404  324649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7 ...
	I1010 18:21:41.352424  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7: {Name:mkfea0f84cddcdc4e3c69624946502bcf937c477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.352501  324649 certs.go:382] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7 -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt
	I1010 18:21:41.352570  324649 certs.go:386] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7 -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key
	I1010 18:21:41.352640  324649 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key
	I1010 18:21:41.352657  324649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt with IP's: []
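	[note] The 10.96.0.1 in the apiserver SAN list above is not arbitrary: it is the first address of ServiceCIDR 10.96.0.0/12, the cluster IP that the kubernetes.default service receives, so the apiserver cert must cover it. Deriving it in Go:

	    package main

	    import (
	    	"fmt"
	    	"net/netip"
	    )

	    func main() {
	    	svcCIDR := netip.MustParsePrefix("10.96.0.0/12")
	    	apiVIP := svcCIDR.Masked().Addr().Next() // network address + 1
	    	fmt.Println(apiVIP)                      // 10.96.0.1
	    }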
	I1010 18:21:41.590793  325699 ssh_runner.go:195] Run: systemctl --version
	I1010 18:21:41.597352  325699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:21:41.632391  325699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:21:41.637267  325699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:21:41.637329  325699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:21:41.646619  325699 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
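	[note] The find/-exec mv pass above sidelines any bridge or podman CNI config so that only kindnet stays active; files already ending in .mk_disabled are left alone. An equivalent filter in Go:

	    package main

	    import (
	    	"fmt"
	    	"os"
	    	"path/filepath"
	    	"strings"
	    )

	    func main() {
	    	entries, err := os.ReadDir("/etc/cni/net.d")
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, e := range entries {
	    		name := e.Name()
	    		// Mirror: -maxdepth 1 -type f, skip files already disabled.
	    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
	    			continue
	    		}
	    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
	    			src := filepath.Join("/etc/cni/net.d", name)
	    			fmt.Println("disabling", src)
	    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
	    				panic(err)
	    			}
	    		}
	    	}
	    }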
	I1010 18:21:41.646643  325699 start.go:495] detecting cgroup driver to use...
	I1010 18:21:41.646672  325699 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:21:41.646707  325699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:21:41.662702  325699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:21:41.675945  325699 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:21:41.675998  325699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:21:41.690577  325699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:21:41.703139  325699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:21:41.785080  325699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:21:41.887442  325699 docker.go:234] disabling docker service ...
	I1010 18:21:41.887510  325699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:21:41.902511  325699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:21:41.915792  325699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:21:41.998153  325699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:21:42.082320  325699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:21:42.095388  325699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:21:42.110606  325699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:21:42.110668  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.120566  325699 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:21:42.120611  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.130445  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.140220  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.149997  325699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:21:42.159172  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.168739  325699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.177930  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.187922  325699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:21:42.196256  325699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:21:42.204604  325699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:42.288532  325699 ssh_runner.go:195] Run: sudo systemctl restart crio
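	[note] For orientation, the sed edits between 18:21:42.110668 and 18:21:42.177930 should leave /etc/crio/crio.conf.d/02-crio.conf with roughly this fragment; it is reconstructed from the commands above, not captured from the node, and the section headers assume cri-o's stock layout:

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]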
	I1010 18:21:42.425073  325699 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:21:42.425143  325699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:21:42.429651  325699 start.go:563] Will wait 60s for crictl version
	I1010 18:21:42.429707  325699 ssh_runner.go:195] Run: which crictl
	I1010 18:21:42.433310  325699 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:21:42.459422  325699 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:21:42.459511  325699 ssh_runner.go:195] Run: crio --version
	I1010 18:21:42.491064  325699 ssh_runner.go:195] Run: crio --version
	I1010 18:21:42.523177  325699 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:21:42.524273  325699 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-821769 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:21:42.544600  325699 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1010 18:21:42.549336  325699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:21:42.561250  325699 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:21:42.561363  325699 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:42.561407  325699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:42.595069  325699 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:42.595092  325699 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:21:42.595137  325699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:42.621683  325699 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:42.621708  325699 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:21:42.621718  325699 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1010 18:21:42.621877  325699 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-821769 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:21:42.621955  325699 ssh_runner.go:195] Run: crio config
	I1010 18:21:42.670696  325699 cni.go:84] Creating CNI manager for ""
	I1010 18:21:42.670714  325699 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:42.670729  325699 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 18:21:42.670749  325699 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-821769 NodeName:default-k8s-diff-port-821769 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:21:42.670867  325699 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-821769"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:21:42.670920  325699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:21:42.679913  325699 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:21:42.679968  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:21:42.688618  325699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1010 18:21:42.703331  325699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:21:42.718311  325699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1010 18:21:42.732968  325699 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:21:42.736868  325699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:21:42.747553  325699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:42.829086  325699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:42.858574  325699 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769 for IP: 192.168.103.2
	I1010 18:21:42.858598  325699 certs.go:195] generating shared ca certs ...
	I1010 18:21:42.858623  325699 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:42.858780  325699 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:21:42.858834  325699 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:21:42.858849  325699 certs.go:257] generating profile certs ...
	I1010 18:21:42.858967  325699 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/client.key
	I1010 18:21:42.859085  325699 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/apiserver.key.10168654
	I1010 18:21:42.859140  325699 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/proxy-client.key
	I1010 18:21:42.859285  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:21:42.859321  325699 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:21:42.859336  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:21:42.859370  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:21:42.859399  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:21:42.859429  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:21:42.859481  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:42.860204  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:21:42.882094  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:21:42.903468  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:21:42.925737  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:21:42.953372  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 18:21:42.973504  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:21:42.992899  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:21:43.011728  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:21:43.030624  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:21:43.049802  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:21:43.070120  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:21:43.090039  325699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:21:43.103785  325699 ssh_runner.go:195] Run: openssl version
	I1010 18:21:43.110111  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:21:43.118950  325699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:21:43.122454  325699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:21:43.122512  325699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:21:43.157901  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:21:43.167111  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:21:43.176248  325699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:43.179836  325699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:43.179900  325699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:43.216894  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:21:43.226252  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:21:43.235390  325699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:21:43.239321  325699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:21:43.239380  325699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:21:43.273487  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
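	[note] The 3ec20f2e.0, b5213941.0 and 51391683.0 link names above are OpenSSL subject-hash lookups: openssl x509 -hash -noout prints the hash, and OpenSSL resolves trust by reading /etc/ssl/certs/<hash>.0. A sketch of the hash-then-link pair, shelling out to the same openssl binary the log uses:

	    package main

	    import (
	    	"os"
	    	"os/exec"
	    	"strings"
	    )

	    // linkBySubjectHash mirrors the "openssl x509 -hash" + "ln -fs" pair above.
	    func linkBySubjectHash(pemPath string) error {
	    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	    	if err != nil {
	    		return err
	    	}
	    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	    	link := "/etc/ssl/certs/" + hash + ".0"
	    	_ = os.Remove(link) // -f semantics: replace an existing link
	    	return os.Symlink(pemPath, link)
	    }

	    func main() {
	    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
	    		panic(err)
	    	}
	    }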
	I1010 18:21:43.282570  325699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:21:43.286433  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:21:43.320357  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:21:43.361223  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:21:43.409478  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:21:43.456529  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:21:43.512033  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
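	[note] openssl x509 -checkend 86400 exits non-zero when the certificate expires within the next 86400 seconds (24 hours), which is how minikube decides whether a control-plane cert needs regeneration. The same test in pure Go:

	    package main

	    import (
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    	"time"
	    )

	    // expiresWithin reports whether the PEM cert at pemPath expires inside window.
	    func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	    	data, err := os.ReadFile(pemPath)
	    	if err != nil {
	    		return false, err
	    	}
	    	block, _ := pem.Decode(data)
	    	if block == nil {
	    		return false, fmt.Errorf("%s: no PEM block", pemPath)
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		return false, err
	    	}
	    	return time.Now().Add(window).After(cert.NotAfter), nil
	    }

	    func main() {
	    	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println("expires within 24h:", soon)
	    }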
	I1010 18:21:43.568244  325699 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:43.568348  325699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:21:43.568440  325699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:21:43.611528  325699 cri.go:89] found id: "1352ca41b0e7626fbf6ee43638506dfab18bd157572e9128f411ac1c5ae54538"
	I1010 18:21:43.611555  325699 cri.go:89] found id: "2aeadcb9e03cc805af5eff4f1b521299f31e4d618387d10eef543b4e95787f70"
	I1010 18:21:43.611560  325699 cri.go:89] found id: "6c6e229b2a8311cf4d60aad6c602e02c2923b5ba2309e536076e40579456e8e2"
	I1010 18:21:43.611565  325699 cri.go:89] found id: "c3f03c923ad6830325d9888fdf2ad9de25ac73298e25b5812f72951d65af2eec"
	I1010 18:21:43.611569  325699 cri.go:89] found id: ""
	I1010 18:21:43.611612  325699 ssh_runner.go:195] Run: sudo runc list -f json
	W1010 18:21:43.627173  325699 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:43Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:21:43.627256  325699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:21:43.638581  325699 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1010 18:21:43.638602  325699 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1010 18:21:43.638652  325699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 18:21:43.650423  325699 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:21:43.651568  325699 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-821769" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:43.652341  325699 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-5815/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-821769" cluster setting kubeconfig missing "default-k8s-diff-port-821769" context setting]
	I1010 18:21:43.653567  325699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:43.655682  325699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 18:21:43.667709  325699 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1010 18:21:43.667743  325699 kubeadm.go:601] duration metric: took 29.134937ms to restartPrimaryControlPlane
	I1010 18:21:43.667753  325699 kubeadm.go:402] duration metric: took 99.518506ms to StartCluster
	I1010 18:21:43.667770  325699 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:43.667845  325699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:43.669889  325699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:43.670281  325699 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:21:43.670407  325699 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:21:43.670513  325699 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-821769"
	I1010 18:21:43.670534  325699 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-821769"
	W1010 18:21:43.670546  325699 addons.go:247] addon storage-provisioner should already be in state true
	I1010 18:21:43.670545  325699 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-821769"
	I1010 18:21:43.670572  325699 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-821769"
	I1010 18:21:43.670580  325699 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-821769"
	W1010 18:21:43.670582  325699 addons.go:247] addon dashboard should already be in state true
	I1010 18:21:43.670595  325699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-821769"
	I1010 18:21:43.670677  325699 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:21:43.670572  325699 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:21:43.670904  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.671151  325699 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:43.671356  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.672130  325699 out.go:179] * Verifying Kubernetes components...
	I1010 18:21:43.672709  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.673037  325699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:43.701170  325699 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:21:43.703152  325699 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:21:43.703189  325699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:21:43.703293  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:43.709767  325699 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-821769"
	W1010 18:21:43.709840  325699 addons.go:247] addon default-storageclass should already be in state true
	I1010 18:21:43.709890  325699 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:21:43.710622  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.711556  325699 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1010 18:21:43.715168  325699 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1010 18:21:43.716093  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1010 18:21:43.716116  325699 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1010 18:21:43.716174  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:43.745595  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:43.754680  325699 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:21:43.754766  325699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:21:43.754853  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:43.766642  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:43.784887  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:43.856990  325699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:43.873309  325699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-821769" to be "Ready" ...
	I1010 18:21:43.936166  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1010 18:21:43.936223  325699 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1010 18:21:43.955509  325699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:21:43.956951  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1010 18:21:43.956971  325699 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1010 18:21:43.985048  325699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:21:43.985772  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1010 18:21:43.986042  325699 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1010 18:21:44.008589  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1010 18:21:44.008614  325699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1010 18:21:44.034035  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1010 18:21:44.034165  325699 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1010 18:21:44.061163  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1010 18:21:44.061253  325699 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1010 18:21:44.112492  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1010 18:21:44.112518  325699 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1010 18:21:44.149803  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1010 18:21:44.149896  325699 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1010 18:21:44.172145  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:21:44.172172  325699 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1010 18:21:44.191656  325699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:21:45.474823  325699 node_ready.go:49] node "default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:45.474857  325699 node_ready.go:38] duration metric: took 1.601510652s for node "default-k8s-diff-port-821769" to be "Ready" ...
	I1010 18:21:45.474873  325699 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:21:45.474923  325699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:21:45.570164  325699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.614616389s)
	I1010 18:21:46.101989  325699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.116012627s)
	I1010 18:21:46.102157  325699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.910456027s)
	I1010 18:21:46.102189  325699 api_server.go:72] duration metric: took 2.431862039s to wait for apiserver process to appear ...
	I1010 18:21:46.102205  325699 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:21:46.102226  325699 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1010 18:21:46.103626  325699 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-821769 addons enable metrics-server
	
	I1010 18:21:46.104750  325699 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1010 18:21:46.105672  325699 addons.go:514] duration metric: took 2.435260331s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1010 18:21:46.106650  325699 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:21:46.106667  325699 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 18:21:41.799013  324649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt ...
	I1010 18:21:41.799039  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt: {Name:mk0669ceb9e9a4f760f7827d6d6abc6856417c2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.799218  324649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key ...
	I1010 18:21:41.799235  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key: {Name:mk52379a2bae9262f9822bb1871c3d07af332ca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.799416  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:21:41.799450  324649 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:21:41.799460  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:21:41.799483  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:21:41.799510  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:21:41.799531  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:21:41.799566  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:41.800117  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:21:41.825626  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:21:41.848227  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:21:41.869328  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:21:41.890966  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:21:41.911349  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:21:41.931728  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:21:41.958462  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:21:41.979538  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:21:42.001499  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:21:42.024216  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:21:42.046423  324649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:21:42.061125  324649 ssh_runner.go:195] Run: openssl version
	I1010 18:21:42.067212  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:21:42.076458  324649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:42.080715  324649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:42.080767  324649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:42.116344  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:21:42.126106  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:21:42.135627  324649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:21:42.139483  324649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:21:42.139535  324649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:21:42.177476  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:21:42.187546  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:21:42.196815  324649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:21:42.200662  324649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:21:42.200712  324649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:21:42.244485  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:21:42.254296  324649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:21:42.258045  324649 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:21:42.258109  324649 kubeadm.go:400] StartCluster: {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:42.258208  324649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:21:42.258261  324649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:21:42.287523  324649 cri.go:89] found id: ""
	I1010 18:21:42.287614  324649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:21:42.296824  324649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 18:21:42.306175  324649 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1010 18:21:42.306236  324649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 18:21:42.314702  324649 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 18:21:42.314726  324649 kubeadm.go:157] found existing configuration files:
	
	I1010 18:21:42.314769  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 18:21:42.323157  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 18:21:42.323223  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 18:21:42.331276  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 18:21:42.340535  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 18:21:42.340585  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 18:21:42.349218  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 18:21:42.357935  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 18:21:42.357997  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 18:21:42.366285  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 18:21:42.374774  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 18:21:42.374815  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1010 18:21:42.383131  324649 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1010 18:21:42.446332  324649 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1010 18:21:42.516841  324649 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 18:21:46.602536  325699 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1010 18:21:46.609396  325699 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:21:46.609424  325699 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 18:21:47.102606  325699 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1010 18:21:47.109427  325699 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1010 18:21:47.110803  325699 api_server.go:141] control plane version: v1.34.1
	I1010 18:21:47.110830  325699 api_server.go:131] duration metric: took 1.008616848s to wait for apiserver health ...
	I1010 18:21:47.110841  325699 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:21:47.116316  325699 system_pods.go:59] 8 kube-system pods found
	I1010 18:21:47.116365  325699 system_pods.go:61] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:47.116444  325699 system_pods.go:61] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:21:47.116486  325699 system_pods.go:61] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:47.116495  325699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:21:47.116503  325699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:21:47.116509  325699 system_pods.go:61] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:47.116528  325699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:21:47.116534  325699 system_pods.go:61] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Running
	I1010 18:21:47.116575  325699 system_pods.go:74] duration metric: took 5.693503ms to wait for pod list to return data ...
	I1010 18:21:47.116596  325699 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:21:47.119751  325699 default_sa.go:45] found service account: "default"
	I1010 18:21:47.119766  325699 default_sa.go:55] duration metric: took 3.155339ms for default service account to be created ...
	I1010 18:21:47.119774  325699 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:21:47.123540  325699 system_pods.go:86] 8 kube-system pods found
	I1010 18:21:47.123568  325699 system_pods.go:89] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:47.123578  325699 system_pods.go:89] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:21:47.123585  325699 system_pods.go:89] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:47.123597  325699 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:21:47.123606  325699 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:21:47.123612  325699 system_pods.go:89] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:47.123619  325699 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:21:47.123624  325699 system_pods.go:89] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Running
	I1010 18:21:47.123632  325699 system_pods.go:126] duration metric: took 3.852363ms to wait for k8s-apps to be running ...
	I1010 18:21:47.123639  325699 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:21:47.123691  325699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:47.140804  325699 system_svc.go:56] duration metric: took 17.156579ms WaitForService to wait for kubelet
	I1010 18:21:47.140832  325699 kubeadm.go:586] duration metric: took 3.47051062s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:21:47.140854  325699 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:21:47.143989  325699 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:21:47.144014  325699 node_conditions.go:123] node cpu capacity is 8
	I1010 18:21:47.144037  325699 node_conditions.go:105] duration metric: took 3.169915ms to run NodePressure ...
	I1010 18:21:47.144073  325699 start.go:241] waiting for startup goroutines ...
	I1010 18:21:47.144085  325699 start.go:246] waiting for cluster config update ...
	I1010 18:21:47.144097  325699 start.go:255] writing updated cluster config ...
	I1010 18:21:47.144428  325699 ssh_runner.go:195] Run: rm -f paused
	I1010 18:21:47.148485  325699 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:21:47.152684  325699 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wrz5v" in "kube-system" namespace to be "Ready" or be gone ...
	W1010 18:21:49.159234  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:21:51.159917  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.776601859Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=14eb1e11-3464-4bc1-9b9c-9bbc8d779655 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.777759547Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv/dashboard-metrics-scraper" id=e7f5eafe-3b9d-413b-ad27-ba57d72a0f50 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.778001642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.783889713Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.784493Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.787424677Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.791730114Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.791757042Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.791779384Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.795551998Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.795576645Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.795593803Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.799875715Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.799903018Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.799925219Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.804079125Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.804104557Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.804127534Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.808265895Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.808295667Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.811312807Z" level=info msg="Created container dbb59932f2180fbc26c3cde4e4a30573e097fe9d42db58b7eefe8fcc9da4608b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv/dashboard-metrics-scraper" id=e7f5eafe-3b9d-413b-ad27-ba57d72a0f50 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.81200158Z" level=info msg="Starting container: dbb59932f2180fbc26c3cde4e4a30573e097fe9d42db58b7eefe8fcc9da4608b" id=17e6bea5-595c-4307-9839-fe411367c43f name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.814487445Z" level=info msg="Started container" PID=1766 containerID=dbb59932f2180fbc26c3cde4e4a30573e097fe9d42db58b7eefe8fcc9da4608b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv/dashboard-metrics-scraper id=17e6bea5-595c-4307-9839-fe411367c43f name=/runtime.v1.RuntimeService/StartContainer sandboxID=449cae5dc8c3085577435cc7e27853826ce51869b1aaa874af05c8b924289951
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.955482779Z" level=info msg="Removing container: 3dd3b88126e06b79575c6a46574b53ce80605578969d25ec0da2a67da4a0adb1" id=748e23d6-2438-4e4e-a3e9-0922473bba15 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:21:32 embed-certs-472518 crio[570]: time="2025-10-10T18:21:32.965983752Z" level=info msg="Removed container 3dd3b88126e06b79575c6a46574b53ce80605578969d25ec0da2a67da4a0adb1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv/dashboard-metrics-scraper" id=748e23d6-2438-4e4e-a3e9-0922473bba15 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	dbb59932f2180       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago       Exited              dashboard-metrics-scraper   3                   449cae5dc8c30       dashboard-metrics-scraper-6ffb444bf9-48chv   kubernetes-dashboard
	793d0e41cb7ae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           29 seconds ago       Running             storage-provisioner         1                   8d7fb10b3e5de       storage-provisioner                          kube-system
	53e86f711eb6d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago       Running             kubernetes-dashboard        0                   c88cebb0849c4       kubernetes-dashboard-855c9754f9-f6cpg        kubernetes-dashboard
	074c24fe6a917       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           About a minute ago   Running             busybox                     1                   96054c8fed7e7       busybox                                      default
	ddf22487acac1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           About a minute ago   Exited              storage-provisioner         0                   8d7fb10b3e5de       storage-provisioner                          kube-system
	f6b933b2408d0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           About a minute ago   Running             coredns                     0                   32a78cad6c375       coredns-66bc5c9577-hrcxc                     kube-system
	106735404cace       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           About a minute ago   Running             kube-proxy                  0                   d8d1b3e97327b       kube-proxy-bq985                             kube-system
	e0b6d3ae90667       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           About a minute ago   Running             kindnet-cni                 0                   577c1e54cf642       kindnet-kpr69                                kube-system
	159136e63b21e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   ae6a41b8b8f9d       kube-controller-manager-embed-certs-472518   kube-system
	3622c66fa378c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   99f06788b944e       kube-apiserver-embed-certs-472518            kube-system
	a5c1be1847d40       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   3e0cf1a2f5771       kube-scheduler-embed-certs-472518            kube-system
	a52804abc0e71       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   4c9c201463143       etcd-embed-certs-472518                      kube-system
	
	
	==> coredns [f6b933b2408d071de401c79bb1ddb49b0541cd7149531813e62215ccc3e7bf16] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44185 - 55502 "HINFO IN 949683366020793061.52074708829021787. udp 54 false 512" NXDOMAIN qr,rd,ra 129 0.183613778s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-472518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-472518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=embed-certs-472518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_19_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:19:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-472518
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:21:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:21:22 +0000   Fri, 10 Oct 2025 18:19:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:21:22 +0000   Fri, 10 Oct 2025 18:19:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:21:22 +0000   Fri, 10 Oct 2025 18:19:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 18:21:22 +0000   Fri, 10 Oct 2025 18:20:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-472518
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                48a864d3-5370-4000-a149-d46b202f0181
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-hrcxc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-embed-certs-472518                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-kpr69                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-embed-certs-472518             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-embed-certs-472518    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-bq985                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-embed-certs-472518             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-48chv    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-f6cpg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 113s               kube-proxy       
	  Normal  Starting                 59s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m                 kubelet          Node embed-certs-472518 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                 kubelet          Node embed-certs-472518 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                 kubelet          Node embed-certs-472518 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           116s               node-controller  Node embed-certs-472518 event: Registered Node embed-certs-472518 in Controller
	  Normal  NodeReady                103s               kubelet          Node embed-certs-472518 status is now: NodeReady
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node embed-certs-472518 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node embed-certs-472518 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x8 over 64s)  kubelet          Node embed-certs-472518 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                node-controller  Node embed-certs-472518 event: Registered Node embed-certs-472518 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 95 0c 3e 92 2e 08 06
	[  +0.052845] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[ +11.354316] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[  +7.101927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 9b 73 27 8c 80 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[  +6.287191] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 27 2d 28 d6 46 08 06
	[  +0.000293] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[Oct10 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 8c 22 f6 6b cf 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 29 bf 13 20 f9 08 06
	[ +15.511156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e d6 74 aa 27 d0 08 06
	[  +0.008495] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	[Oct10 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 0b 54 33 52 4e 08 06
	[  +0.000597] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	
	
	==> etcd [a52804abc0e7184b8ec037e1a9594b3794f50868b2f90978e95ba4f3dac34818] <==
	{"level":"warn","ts":"2025-10-10T18:20:50.611758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.620814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.631331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.643420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.662565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.672803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.682178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.692147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.701841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.710580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.722741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.733554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.742502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.751223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.769159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.785765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.794736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.802524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.818864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.828227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.836886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.850678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.859561Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.867816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:20:50.954676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50880","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:21:52 up  1:04,  0 user,  load average: 6.39, 4.86, 3.04
	Linux embed-certs-472518 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e0b6d3ae90667b41d0180616bdfecaebc14631771bd3e49defdf3d111b564ad9] <==
	I1010 18:20:52.483666       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:20:52.485813       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1010 18:20:52.486012       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:20:52.486026       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:20:52.486060       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:20:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:20:52.780915       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:20:52.780981       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:20:52.781001       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:20:52.785740       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1010 18:21:22.782354       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1010 18:21:22.782357       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1010 18:21:22.782352       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1010 18:21:22.782364       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1010 18:21:24.181878       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:21:24.181935       1 metrics.go:72] Registering metrics
	I1010 18:21:24.182536       1 controller.go:711] "Syncing nftables rules"
	I1010 18:21:32.787156       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1010 18:21:32.787227       1 main.go:301] handling current node
	I1010 18:21:42.781210       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1010 18:21:42.781256       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3622c66fa378c4b8614e23f6545ac6151fa6ef096364723cbdd5d22677bc0ca9] <==
	I1010 18:20:51.761580       1 cache.go:39] Caches are synced for autoregister controller
	I1010 18:20:51.761640       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:20:51.740187       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1010 18:20:51.770725       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1010 18:20:51.795842       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1010 18:20:51.807173       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:20:51.837146       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1010 18:20:51.838681       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1010 18:20:51.838694       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1010 18:20:51.842183       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1010 18:20:51.842628       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1010 18:20:51.843503       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1010 18:20:51.843594       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1010 18:20:51.863562       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1010 18:20:51.870354       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1010 18:20:52.504040       1 controller.go:667] quota admission added evaluator for: namespaces
	I1010 18:20:52.558986       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1010 18:20:52.589519       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:20:52.599566       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:20:52.641354       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:20:52.697450       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.227.155"}
	I1010 18:20:52.714637       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.93.64"}
	I1010 18:20:55.405382       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1010 18:20:55.506295       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1010 18:20:55.655439       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [159136e63b21ef09e85b6efdc6b5a0f5be67f5af9a3516c5f8cae7be0af60846] <==
	I1010 18:20:55.052745       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 18:20:55.052864       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1010 18:20:55.052872       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1010 18:20:55.055903       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1010 18:20:55.057719       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1010 18:20:55.060070       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:20:55.063197       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1010 18:20:55.063197       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:20:55.063340       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1010 18:20:55.063427       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-472518"
	I1010 18:20:55.063480       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1010 18:20:55.064319       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1010 18:20:55.066674       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1010 18:20:55.067495       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1010 18:20:55.067755       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1010 18:20:55.067881       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1010 18:20:55.079186       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 18:20:55.084321       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1010 18:20:55.084420       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1010 18:20:55.084478       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1010 18:20:55.084536       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1010 18:20:55.084550       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1010 18:20:55.086701       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1010 18:20:55.089000       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1010 18:20:55.091355       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [106735404cace4f8939a0b5039e3d3506588ed35258591de2b5d9b775beb2175] <==
	I1010 18:20:52.540633       1 server_linux.go:53] "Using iptables proxy"
	I1010 18:20:52.606915       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 18:20:52.713692       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 18:20:52.713745       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1010 18:20:52.713844       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:20:52.745153       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:20:52.745217       1 server_linux.go:132] "Using iptables Proxier"
	I1010 18:20:52.761719       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:20:52.763696       1 server.go:527] "Version info" version="v1.34.1"
	I1010 18:20:52.763847       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:20:52.768425       1 config.go:200] "Starting service config controller"
	I1010 18:20:52.768487       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 18:20:52.768900       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 18:20:52.769115       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 18:20:52.769184       1 config.go:106] "Starting endpoint slice config controller"
	I1010 18:20:52.769202       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 18:20:52.769237       1 config.go:309] "Starting node config controller"
	I1010 18:20:52.769249       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 18:20:52.769257       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 18:20:52.871913       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1010 18:20:52.872073       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 18:20:52.872118       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a5c1be1847d40640048f86d96a7f93b4166d1688a8afd40971231c2b59f73202] <==
	I1010 18:20:49.927660       1 serving.go:386] Generated self-signed cert in-memory
	W1010 18:20:51.685824       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1010 18:20:51.685856       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1010 18:20:51.685867       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1010 18:20:51.685877       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1010 18:20:51.781278       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1010 18:20:51.781309       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:20:51.785636       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1010 18:20:51.785752       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:20:51.785765       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:20:51.785796       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1010 18:20:51.886842       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 10 18:20:55 embed-certs-472518 kubelet[732]: I1010 18:20:55.689096     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/da1e80de-882a-4c82-a6f9-ab96c978cfec-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-48chv\" (UID: \"da1e80de-882a-4c82-a6f9-ab96c978cfec\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv"
	Oct 10 18:20:58 embed-certs-472518 kubelet[732]: I1010 18:20:58.853078     732 scope.go:117] "RemoveContainer" containerID="b0e574a1e8a9408f590c0e99b094b928f7c5676d97d98f572b4174ed603efc41"
	Oct 10 18:20:59 embed-certs-472518 kubelet[732]: I1010 18:20:59.859086     732 scope.go:117] "RemoveContainer" containerID="b0e574a1e8a9408f590c0e99b094b928f7c5676d97d98f572b4174ed603efc41"
	Oct 10 18:20:59 embed-certs-472518 kubelet[732]: I1010 18:20:59.859503     732 scope.go:117] "RemoveContainer" containerID="afb956e3828986ee14f1408cce96c5b3f289a23883e1878d2410dd28c1fa8639"
	Oct 10 18:20:59 embed-certs-472518 kubelet[732]: E1010 18:20:59.859708     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-48chv_kubernetes-dashboard(da1e80de-882a-4c82-a6f9-ab96c978cfec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv" podUID="da1e80de-882a-4c82-a6f9-ab96c978cfec"
	Oct 10 18:21:00 embed-certs-472518 kubelet[732]: I1010 18:21:00.864725     732 scope.go:117] "RemoveContainer" containerID="afb956e3828986ee14f1408cce96c5b3f289a23883e1878d2410dd28c1fa8639"
	Oct 10 18:21:00 embed-certs-472518 kubelet[732]: E1010 18:21:00.864945     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-48chv_kubernetes-dashboard(da1e80de-882a-4c82-a6f9-ab96c978cfec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv" podUID="da1e80de-882a-4c82-a6f9-ab96c978cfec"
	Oct 10 18:21:03 embed-certs-472518 kubelet[732]: I1010 18:21:03.885734     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-f6cpg" podStartSLOduration=1.70621527 podStartE2EDuration="8.885714272s" podCreationTimestamp="2025-10-10 18:20:55 +0000 UTC" firstStartedPulling="2025-10-10 18:20:55.976938383 +0000 UTC m=+7.302703098" lastFinishedPulling="2025-10-10 18:21:03.156437368 +0000 UTC m=+14.482202100" observedRunningTime="2025-10-10 18:21:03.885414976 +0000 UTC m=+15.211179713" watchObservedRunningTime="2025-10-10 18:21:03.885714272 +0000 UTC m=+15.211479008"
	Oct 10 18:21:09 embed-certs-472518 kubelet[732]: I1010 18:21:09.699128     732 scope.go:117] "RemoveContainer" containerID="afb956e3828986ee14f1408cce96c5b3f289a23883e1878d2410dd28c1fa8639"
	Oct 10 18:21:09 embed-certs-472518 kubelet[732]: I1010 18:21:09.891203     732 scope.go:117] "RemoveContainer" containerID="afb956e3828986ee14f1408cce96c5b3f289a23883e1878d2410dd28c1fa8639"
	Oct 10 18:21:09 embed-certs-472518 kubelet[732]: I1010 18:21:09.891464     732 scope.go:117] "RemoveContainer" containerID="3dd3b88126e06b79575c6a46574b53ce80605578969d25ec0da2a67da4a0adb1"
	Oct 10 18:21:09 embed-certs-472518 kubelet[732]: E1010 18:21:09.891686     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-48chv_kubernetes-dashboard(da1e80de-882a-4c82-a6f9-ab96c978cfec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv" podUID="da1e80de-882a-4c82-a6f9-ab96c978cfec"
	Oct 10 18:21:19 embed-certs-472518 kubelet[732]: I1010 18:21:19.699334     732 scope.go:117] "RemoveContainer" containerID="3dd3b88126e06b79575c6a46574b53ce80605578969d25ec0da2a67da4a0adb1"
	Oct 10 18:21:19 embed-certs-472518 kubelet[732]: E1010 18:21:19.699588     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-48chv_kubernetes-dashboard(da1e80de-882a-4c82-a6f9-ab96c978cfec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv" podUID="da1e80de-882a-4c82-a6f9-ab96c978cfec"
	Oct 10 18:21:22 embed-certs-472518 kubelet[732]: I1010 18:21:22.923601     732 scope.go:117] "RemoveContainer" containerID="ddf22487acac1f44767d6faad43efdcb55e126e4d543b64497d3614254c5e0d5"
	Oct 10 18:21:32 embed-certs-472518 kubelet[732]: I1010 18:21:32.775020     732 scope.go:117] "RemoveContainer" containerID="3dd3b88126e06b79575c6a46574b53ce80605578969d25ec0da2a67da4a0adb1"
	Oct 10 18:21:32 embed-certs-472518 kubelet[732]: I1010 18:21:32.954067     732 scope.go:117] "RemoveContainer" containerID="3dd3b88126e06b79575c6a46574b53ce80605578969d25ec0da2a67da4a0adb1"
	Oct 10 18:21:32 embed-certs-472518 kubelet[732]: I1010 18:21:32.954318     732 scope.go:117] "RemoveContainer" containerID="dbb59932f2180fbc26c3cde4e4a30573e097fe9d42db58b7eefe8fcc9da4608b"
	Oct 10 18:21:32 embed-certs-472518 kubelet[732]: E1010 18:21:32.954541     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-48chv_kubernetes-dashboard(da1e80de-882a-4c82-a6f9-ab96c978cfec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv" podUID="da1e80de-882a-4c82-a6f9-ab96c978cfec"
	Oct 10 18:21:39 embed-certs-472518 kubelet[732]: I1010 18:21:39.699019     732 scope.go:117] "RemoveContainer" containerID="dbb59932f2180fbc26c3cde4e4a30573e097fe9d42db58b7eefe8fcc9da4608b"
	Oct 10 18:21:39 embed-certs-472518 kubelet[732]: E1010 18:21:39.699244     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-48chv_kubernetes-dashboard(da1e80de-882a-4c82-a6f9-ab96c978cfec)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-48chv" podUID="da1e80de-882a-4c82-a6f9-ab96c978cfec"
	Oct 10 18:21:47 embed-certs-472518 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 10 18:21:47 embed-certs-472518 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 10 18:21:47 embed-certs-472518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 10 18:21:47 embed-certs-472518 systemd[1]: kubelet.service: Consumed 1.839s CPU time.
	
	
	==> kubernetes-dashboard [53e86f711eb6d6e029bf1dc5a1c14477be282ed5a7268cc1290a1a04c4d06252] <==
	2025/10/10 18:21:03 Using namespace: kubernetes-dashboard
	2025/10/10 18:21:03 Using in-cluster config to connect to apiserver
	2025/10/10 18:21:03 Using secret token for csrf signing
	2025/10/10 18:21:03 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/10 18:21:03 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/10 18:21:03 Successful initial request to the apiserver, version: v1.34.1
	2025/10/10 18:21:03 Generating JWE encryption key
	2025/10/10 18:21:03 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/10 18:21:03 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/10 18:21:03 Initializing JWE encryption key from synchronized object
	2025/10/10 18:21:03 Creating in-cluster Sidecar client
	2025/10/10 18:21:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/10 18:21:03 Serving insecurely on HTTP port: 9090
	2025/10/10 18:21:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/10 18:21:03 Starting overwatch
	
	
	==> storage-provisioner [793d0e41cb7aec4a0f299624e039a34166a5a6807a3d1eedf9e3849fcb6c50de] <==
	W1010 18:21:22.984441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:26.440354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:30.700720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:34.299127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:37.353568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:40.376248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:40.380564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:21:40.380691       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 18:21:40.380822       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a789c1a6-8b74-43de-be1d-69d02ac1d0c8", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-472518_b73d5408-e577-4024-974e-82622bdca229 became leader
	I1010 18:21:40.380840       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-472518_b73d5408-e577-4024-974e-82622bdca229!
	W1010 18:21:40.404794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:40.409720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:21:40.481038       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-472518_b73d5408-e577-4024-974e-82622bdca229!
	W1010 18:21:42.412928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:42.416985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:44.421261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:44.426977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:46.430224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:46.434377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:48.437467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:48.441640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:50.445952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:50.451478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:52.455394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:21:52.460090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ddf22487acac1f44767d6faad43efdcb55e126e4d543b64497d3614254c5e0d5] <==
	I1010 18:20:52.470804       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1010 18:21:22.507231       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
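The tail of the log above shows why this Pause run is flaky: the first storage-provisioner instance exited fatally when the kubernetes service VIP timed out (dial tcp 10.96.0.1:443: i/o timeout), and its replacement only acquired the k8s.io-minikube-hostpath lease at 18:21:40. A minimal in-cluster spot check for that class of failure, assuming the kubectl context from this run and a curl-capable image (both assumptions, not part of the test harness):

	# hypothetical spot check: probe the service VIP the provisioner timed out on
	kubectl --context embed-certs-472518 run vip-check --rm -i --restart=Never \
	  --image=curlimages/curl -- curl -sk -m 5 https://10.96.0.1:443/version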
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-472518 -n embed-certs-472518
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-472518 -n embed-certs-472518: exit status 2 (401.858945ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-472518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.19s)
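The failure signature here matches the other */serial/Pause failures in this report: the pause itself appears to succeed, but the follow-up status probe exits 2 even though it prints Running. When triaging by hand it helps to read all of the status fields in one shot instead of one template at a time; a sketch, assuming the profile had not yet been deleted (the Audit log below shows it was removed shortly after this test):

	# hypothetical triage command; Host/Kubelet/APIServer are fields of minikube's status output
	out/minikube-linux-amd64 status -p embed-certs-472518 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'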

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-121129 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-121129 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (238.633008ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:22:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-121129 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
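The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's pre-flight paused check: before enabling an addon it lists paused containers via runc, and on this crio node the check itself errors out because /run/runc does not exist. A minimal reproduction of the same probe, assuming SSH access to the profile's node (the first command mirrors the one quoted in stderr):

	# re-run the exact probe minikube issued, then confirm the missing runc state dir
	out/minikube-linux-amd64 ssh -p newest-cni-121129 -- sudo runc list -f json
	out/minikube-linux-amd64 ssh -p newest-cni-121129 -- ls /run/runc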
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-121129
helpers_test.go:243: (dbg) docker inspect newest-cni-121129:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3",
	        "Created": "2025-10-10T18:21:36.293708252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 325691,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:21:36.332894954Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3/hostname",
	        "HostsPath": "/var/lib/docker/containers/44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3/hosts",
	        "LogPath": "/var/lib/docker/containers/44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3/44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3-json.log",
	        "Name": "/newest-cni-121129",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-121129:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-121129",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3",
	                "LowerDir": "/var/lib/docker/overlay2/bf33ff5e2644ca9451feb6b194bd1b9cfcbf6459017f40bc6397e87fbe55a746-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf33ff5e2644ca9451feb6b194bd1b9cfcbf6459017f40bc6397e87fbe55a746/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf33ff5e2644ca9451feb6b194bd1b9cfcbf6459017f40bc6397e87fbe55a746/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf33ff5e2644ca9451feb6b194bd1b9cfcbf6459017f40bc6397e87fbe55a746/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-121129",
	                "Source": "/var/lib/docker/volumes/newest-cni-121129/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-121129",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-121129",
	                "name.minikube.sigs.k8s.io": "newest-cni-121129",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9009225e8adbb18439fd201751883999311279365bb06eaf05c3d72722c77ad2",
	            "SandboxKey": "/var/run/docker/netns/9009225e8adb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-121129": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:54:06:ca:4c:52",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cd26f66f7d0715bf666ca6e5dc6891adf394cc9a58fe404ddf68c49d82b6f4c2",
	                    "EndpointID": "c128186fe6e1e422a950da5982d54107d83dcca348171668c90ca2bac75accc8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-121129",
	                        "44f7c2aef6cc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
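Most of the inspect dump above matters only for the published ports: 8443/tcp is the apiserver endpoint, mapped here to 127.0.0.1:33126. A one-liner to extract that mapping from the same JSON, assuming jq is installed on the host (an assumption, not part of the harness):

	# prints 33126 for the inspect output shown above
	docker inspect newest-cni-121129 \
	  | jq -r '.[0].NetworkSettings.Ports["8443/tcp"][0].HostPort'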
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-121129 -n newest-cni-121129
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-121129 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-523797                                                                                                                                                                                                               │ disable-driver-mounts-523797 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ stop    │ -p no-preload-556024 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ addons  │ enable dashboard -p embed-certs-472518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p embed-certs-472518 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable dashboard -p no-preload-556024 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p no-preload-556024 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-821769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-821769 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ image   │ old-k8s-version-141193 image list --format=json                                                                                                                                                                                               │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p old-k8s-version-141193 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-821769 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ no-preload-556024 image list --format=json                                                                                                                                                                                                    │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p no-preload-556024 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ embed-certs-472518 image list --format=json                                                                                                                                                                                                   │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p embed-certs-472518 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ delete  │ -p no-preload-556024                                                                                                                                                                                                                          │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p no-preload-556024                                                                                                                                                                                                                          │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p embed-certs-472518                                                                                                                                                                                                                         │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p embed-certs-472518                                                                                                                                                                                                                         │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-121129 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:21:36
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:21:36.443972  325699 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:21:36.444232  325699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:36.444242  325699 out.go:374] Setting ErrFile to fd 2...
	I1010 18:21:36.444246  325699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:21:36.444423  325699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:21:36.444868  325699 out.go:368] Setting JSON to false
	I1010 18:21:36.445989  325699 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3836,"bootTime":1760116660,"procs":298,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:21:36.446111  325699 start.go:141] virtualization: kvm guest
	I1010 18:21:36.447655  325699 out.go:179] * [default-k8s-diff-port-821769] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:21:36.451745  325699 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:21:36.451794  325699 notify.go:220] Checking for updates...
	I1010 18:21:36.453782  325699 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:21:36.454903  325699 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:36.456168  325699 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:21:36.457303  325699 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:21:36.458541  325699 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:21:36.460107  325699 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:36.460644  325699 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:21:36.487553  325699 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:21:36.487706  325699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:21:36.548644  325699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:82 SystemTime:2025-10-10 18:21:36.539560881 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:21:36.548787  325699 docker.go:318] overlay module found
	I1010 18:21:36.550878  325699 out.go:179] * Using the docker driver based on existing profile
	I1010 18:21:31.750233  324649 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1010 18:21:31.750529  324649 start.go:159] libmachine.API.Create for "newest-cni-121129" (driver="docker")
	I1010 18:21:31.750565  324649 client.go:168] LocalClient.Create starting
	I1010 18:21:31.750670  324649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem
	I1010 18:21:31.750723  324649 main.go:141] libmachine: Decoding PEM data...
	I1010 18:21:31.750746  324649 main.go:141] libmachine: Parsing certificate...
	I1010 18:21:31.750822  324649 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem
	I1010 18:21:31.750849  324649 main.go:141] libmachine: Decoding PEM data...
	I1010 18:21:31.750864  324649 main.go:141] libmachine: Parsing certificate...
	I1010 18:21:31.751250  324649 cli_runner.go:164] Run: docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1010 18:21:31.769180  324649 cli_runner.go:211] docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1010 18:21:31.769299  324649 network_create.go:284] running [docker network inspect newest-cni-121129] to gather additional debugging logs...
	I1010 18:21:31.769325  324649 cli_runner.go:164] Run: docker network inspect newest-cni-121129
	W1010 18:21:31.785789  324649 cli_runner.go:211] docker network inspect newest-cni-121129 returned with exit code 1
	I1010 18:21:31.785839  324649 network_create.go:287] error running [docker network inspect newest-cni-121129]: docker network inspect newest-cni-121129: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-121129 not found
	I1010 18:21:31.785860  324649 network_create.go:289] output of [docker network inspect newest-cni-121129]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-121129 not found
	
	** /stderr **
	I1010 18:21:31.785985  324649 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:21:31.803517  324649 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f8fb0c8a54c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:51:a2:ab:ca:d6} reservation:<nil>}
	I1010 18:21:31.804204  324649 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bdbbffbd65c1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:de:11:33:77:48:20} reservation:<nil>}
	I1010 18:21:31.804907  324649 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-0b6a5dab2001 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:4a:93:a5:d3:c3:8f} reservation:<nil>}
	I1010 18:21:31.805493  324649 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-62177a68d9eb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:70:f2:a2:da:00} reservation:<nil>}
	I1010 18:21:31.806333  324649 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f75590}
	I1010 18:21:31.806360  324649 network_create.go:124] attempt to create docker network newest-cni-121129 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1010 18:21:31.806398  324649 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-121129 newest-cni-121129
	I1010 18:21:31.865994  324649 network_create.go:108] docker network newest-cni-121129 192.168.85.0/24 created
	I1010 18:21:31.866029  324649 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-121129" container
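
The network.go lines above are minikube's subnet probe: each /24 already backing a host bridge (192.168.49/58/67/76) is skipped, the first free one (192.168.85.0/24) is claimed, and the node is assigned the first client address (.2). A minimal bash sketch of the same create step, with the flags copied from the cli_runner call above:

	# exit status 1 from inspect means the name is free, so create it
	docker network inspect newest-cni-121129 >/dev/null 2>&1 || \
	docker network create --driver=bridge \
	    --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true \
	    --label=name.minikube.sigs.k8s.io=newest-cni-121129 \
	    newest-cni-121129
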
	I1010 18:21:31.866140  324649 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1010 18:21:31.883599  324649 cli_runner.go:164] Run: docker volume create newest-cni-121129 --label name.minikube.sigs.k8s.io=newest-cni-121129 --label created_by.minikube.sigs.k8s.io=true
	I1010 18:21:31.901755  324649 oci.go:103] Successfully created a docker volume newest-cni-121129
	I1010 18:21:31.901834  324649 cli_runner.go:164] Run: docker run --rm --name newest-cni-121129-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-121129 --entrypoint /usr/bin/test -v newest-cni-121129:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -d /var/lib
	I1010 18:21:32.316917  324649 oci.go:107] Successfully prepared a docker volume newest-cni-121129
	I1010 18:21:32.316960  324649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:32.316979  324649 kic.go:194] Starting extracting preloaded images to volume ...
	I1010 18:21:32.317041  324649 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-121129:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1010 18:21:36.215225  324649 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-121129:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 -I lz4 -xf /preloaded.tar -C /extractDir: (3.898129423s)
	I1010 18:21:36.215274  324649 kic.go:203] duration metric: took 3.898290657s to extract preloaded images to volume ...
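
The ~3.9s preload step mounts the lz4 tarball read-only into a throwaway kicbase container and untars it into the named volume that becomes the node's /var, so CRI-O starts with all images already in place. The same invocation, with the long arguments lifted into variables for readability:

	PRELOAD=/home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6
	docker run --rm --entrypoint /usr/bin/tar \
	    -v "$PRELOAD:/preloaded.tar:ro" -v newest-cni-121129:/extractDir \
	    "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir
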
	W1010 18:21:36.215394  324649 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1010 18:21:36.215437  324649 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1010 18:21:36.215483  324649 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1010 18:21:36.276319  324649 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-121129 --name newest-cni-121129 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-121129 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-121129 --network newest-cni-121129 --ip 192.168.85.2 --volume newest-cni-121129:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6
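
Each --publish=127.0.0.1:: mapping in that run command binds an ephemeral loopback port, which is why the SSH dialer later targets 127.0.0.1:33123 instead of the container IP. The port for 22/tcp can be recovered with the same template the log uses:

	docker container inspect -f \
	    '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-121129
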
	I1010 18:21:36.552156  325699 start.go:305] selected driver: docker
	I1010 18:21:36.552182  325699 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:36.552263  325699 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:21:36.552888  325699 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:21:36.619123  325699 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-10 18:21:36.608354336 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:21:36.619511  325699 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:21:36.619549  325699 cni.go:84] Creating CNI manager for ""
	I1010 18:21:36.619602  325699 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:36.619655  325699 start.go:349] cluster config:
	{Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:36.621174  325699 out.go:179] * Starting "default-k8s-diff-port-821769" primary control-plane node in "default-k8s-diff-port-821769" cluster
	I1010 18:21:36.623163  325699 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:21:36.624439  325699 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:21:36.625488  325699 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:36.625524  325699 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:21:36.625536  325699 cache.go:58] Caching tarball of preloaded images
	I1010 18:21:36.625602  325699 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:21:36.625620  325699 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:21:36.625631  325699 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1010 18:21:36.625748  325699 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/config.json ...
	I1010 18:21:36.646734  325699 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:21:36.646759  325699 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:21:36.646779  325699 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:21:36.646809  325699 start.go:360] acquireMachinesLock for default-k8s-diff-port-821769: {Name:mk32364aa6b9096e7aa0195f0d450a3e04b4f6f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:21:36.646879  325699 start.go:364] duration metric: took 45.359µs to acquireMachinesLock for "default-k8s-diff-port-821769"
	I1010 18:21:36.646912  325699 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:21:36.646922  325699 fix.go:54] fixHost starting: 
	I1010 18:21:36.647229  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:36.665115  325699 fix.go:112] recreateIfNeeded on default-k8s-diff-port-821769: state=Stopped err=<nil>
	W1010 18:21:36.665142  325699 fix.go:138] unexpected machine state, will restart: <nil>
	I1010 18:21:36.566005  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Running}}
	I1010 18:21:36.587637  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:36.609439  324649 cli_runner.go:164] Run: docker exec newest-cni-121129 stat /var/lib/dpkg/alternatives/iptables
	I1010 18:21:36.654885  324649 oci.go:144] the created container "newest-cni-121129" has a running status.
	I1010 18:21:36.654911  324649 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa...
	I1010 18:21:37.150404  324649 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1010 18:21:37.181411  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:37.202450  324649 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1010 18:21:37.202483  324649 kic_runner.go:114] Args: [docker exec --privileged newest-cni-121129 chown docker:docker /home/docker/.ssh/authorized_keys]
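
SSH key provisioning is three steps: generate a keypair under the profile's machines directory, copy the public half into the container as /home/docker/.ssh/authorized_keys (381 bytes here), and fix ownership. A hedged sketch; minikube generates the key in Go and streams the file through its kic_runner, so the ssh-keygen and docker cp calls below are only stand-ins for those steps:

	M=/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129
	ssh-keygen -t rsa -N '' -f "$M/id_rsa"    # stand-in for the in-process key generation
	docker cp "$M/id_rsa.pub" newest-cni-121129:/home/docker/.ssh/authorized_keys
	docker exec --privileged newest-cni-121129 chown docker:docker /home/docker/.ssh/authorized_keys
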
	I1010 18:21:37.249728  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:37.274026  324649 machine.go:93] provisionDockerMachine start ...
	I1010 18:21:37.274139  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:37.295767  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.296119  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:37.296140  324649 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:21:37.433206  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:21:37.433232  324649 ubuntu.go:182] provisioning hostname "newest-cni-121129"
	I1010 18:21:37.433293  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:37.451228  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.451497  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:37.451516  324649 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-121129 && echo "newest-cni-121129" | sudo tee /etc/hostname
	I1010 18:21:37.593295  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:21:37.593411  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:37.611384  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.611592  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:37.611611  324649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-121129' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-121129/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-121129' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:21:37.744646  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:21:37.744678  324649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:21:37.744702  324649 ubuntu.go:190] setting up certificates
	I1010 18:21:37.744714  324649 provision.go:84] configureAuth start
	I1010 18:21:37.744775  324649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:21:37.762585  324649 provision.go:143] copyHostCerts
	I1010 18:21:37.762636  324649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:21:37.762644  324649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:21:37.762711  324649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:21:37.762804  324649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:21:37.762812  324649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:21:37.762837  324649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:21:37.762889  324649 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:21:37.762896  324649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:21:37.762918  324649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:21:37.762968  324649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.newest-cni-121129 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-121129]
	I1010 18:21:38.017732  324649 provision.go:177] copyRemoteCerts
	I1010 18:21:38.017792  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:21:38.017828  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.035754  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.135582  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:21:38.158372  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 18:21:38.177887  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
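
At this point the server certificate generated above, with SANs taken from the san=[...] list, has been copied to /etc/docker/server.pem on the node. One way to confirm the SANs landed (a sketch; the IP-vs-DNS split is minikube's usual layout, not shown in the log):

	openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# expect something like: IP:127.0.0.1, IP:192.168.85.2, DNS:localhost, DNS:minikube, DNS:newest-cni-121129
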
	I1010 18:21:38.197335  324649 provision.go:87] duration metric: took 452.609625ms to configureAuth
	I1010 18:21:38.197361  324649 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:21:38.197520  324649 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:38.197616  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.215693  324649 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:38.215929  324649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1010 18:21:38.215945  324649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:21:38.487590  324649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:21:38.487615  324649 machine.go:96] duration metric: took 1.213566349s to provisionDockerMachine
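
The SSH command above leaves exactly this file on the node before restarting CRI-O, which is how the 10.96.0.0/12 service CIDR becomes an allowed insecure-registry range:

	# /etc/sysconfig/crio.minikube, as written by the tee above
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
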
	I1010 18:21:38.487627  324649 client.go:171] duration metric: took 6.737054602s to LocalClient.Create
	I1010 18:21:38.487644  324649 start.go:167] duration metric: took 6.737116946s to libmachine.API.Create "newest-cni-121129"
	I1010 18:21:38.487653  324649 start.go:293] postStartSetup for "newest-cni-121129" (driver="docker")
	I1010 18:21:38.487667  324649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:21:38.487718  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:21:38.487755  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.505301  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.604755  324649 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:21:38.608251  324649 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:21:38.608275  324649 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:21:38.608284  324649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:21:38.608338  324649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:21:38.608407  324649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:21:38.608505  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:21:38.617071  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:38.639238  324649 start.go:296] duration metric: took 151.569017ms for postStartSetup
	I1010 18:21:38.639632  324649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:21:38.658650  324649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/config.json ...
	I1010 18:21:38.658910  324649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:21:38.658972  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.676393  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.770086  324649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:21:38.774771  324649 start.go:128] duration metric: took 7.026418609s to createHost
	I1010 18:21:38.774799  324649 start.go:83] releasing machines lock for "newest-cni-121129", held for 7.026572954s
	I1010 18:21:38.774867  324649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:21:38.794249  324649 ssh_runner.go:195] Run: cat /version.json
	I1010 18:21:38.794292  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.794343  324649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:21:38.794395  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:38.812781  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.813044  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:38.964620  324649 ssh_runner.go:195] Run: systemctl --version
	I1010 18:21:38.971493  324649 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:21:39.008047  324649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:21:39.012702  324649 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:21:39.012768  324649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:21:39.043167  324649 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1010 18:21:39.043195  324649 start.go:495] detecting cgroup driver to use...
	I1010 18:21:39.043236  324649 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:21:39.043275  324649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:21:39.060424  324649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:21:39.073422  324649 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:21:39.073477  324649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:21:39.090113  324649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:21:39.108184  324649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:21:39.193075  324649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:21:39.284238  324649 docker.go:234] disabling docker service ...
	I1010 18:21:39.284295  324649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:21:39.303174  324649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:21:39.316224  324649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:21:39.401593  324649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:21:39.486478  324649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:21:39.499671  324649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:21:39.515336  324649 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:21:39.515393  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.526705  324649 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:21:39.526768  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.536968  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.546772  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.556927  324649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:21:39.566265  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.576240  324649 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.591514  324649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:39.601231  324649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:21:39.609546  324649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:21:39.617339  324649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:39.697520  324649 ssh_runner.go:195] Run: sudo systemctl restart crio
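
Before this restart, the crictl one-liner pinned the client to the CRI-O socket and the sed edits rewrote /etc/crio/crio.conf.d/02-crio.conf. The two files should now read roughly as follows (a sketch: the key/value lines are confirmed by the commands above, while the TOML table headers are assumed from the stock CRI-O layout):

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (fragment)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
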
	I1010 18:21:39.833447  324649 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:21:39.833510  324649 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:21:39.837650  324649 start.go:563] Will wait 60s for crictl version
	I1010 18:21:39.837706  324649 ssh_runner.go:195] Run: which crictl
	I1010 18:21:39.841778  324649 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:21:39.866403  324649 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:21:39.866489  324649 ssh_runner.go:195] Run: crio --version
	I1010 18:21:39.894594  324649 ssh_runner.go:195] Run: crio --version
	I1010 18:21:39.923363  324649 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:21:39.924491  324649 cli_runner.go:164] Run: docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:21:39.942921  324649 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1010 18:21:39.947042  324649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:21:39.959308  324649 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1010 18:21:36.669200  325699 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-821769" ...
	I1010 18:21:36.669266  325699 cli_runner.go:164] Run: docker start default-k8s-diff-port-821769
	I1010 18:21:36.950209  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:36.973712  325699 kic.go:430] container "default-k8s-diff-port-821769" state is running.
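
For the existing default-k8s-diff-port-821769 profile, fixHost skips creation entirely: it restarts the stopped container and re-reads its state, the equivalent of:

	docker start default-k8s-diff-port-821769
	docker container inspect default-k8s-diff-port-821769 --format='{{.State.Status}}'   # now reports running
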
	I1010 18:21:36.974205  325699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-821769
	I1010 18:21:36.999384  325699 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/config.json ...
	I1010 18:21:36.999678  325699 machine.go:93] provisionDockerMachine start ...
	I1010 18:21:36.999832  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:37.025140  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:37.025476  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:37.025494  325699 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:21:37.026335  325699 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37242->127.0.0.1:33128: read: connection reset by peer
	I1010 18:21:40.162873  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-821769
	
	I1010 18:21:40.162901  325699 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-821769"
	I1010 18:21:40.162999  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.189150  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:40.189443  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:40.189466  325699 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-821769 && echo "default-k8s-diff-port-821769" | sudo tee /etc/hostname
	I1010 18:21:40.331478  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-821769
	
	I1010 18:21:40.331570  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.349460  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:40.349752  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:40.349789  325699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-821769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-821769/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-821769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:21:40.495960  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:21:40.495988  325699 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:21:40.496005  325699 ubuntu.go:190] setting up certificates
	I1010 18:21:40.496013  325699 provision.go:84] configureAuth start
	I1010 18:21:40.496106  325699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-821769
	I1010 18:21:40.515849  325699 provision.go:143] copyHostCerts
	I1010 18:21:40.515918  325699 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:21:40.515937  325699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:21:40.516030  325699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:21:40.516170  325699 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:21:40.516190  325699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:21:40.516240  325699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:21:40.516317  325699 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:21:40.516328  325699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:21:40.516365  325699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:21:40.516437  325699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-821769 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-821769 localhost minikube]
	I1010 18:21:40.621000  325699 provision.go:177] copyRemoteCerts
	I1010 18:21:40.621136  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:21:40.621199  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.639539  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:40.738484  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:21:40.758076  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1010 18:21:40.777450  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1010 18:21:40.796411  325699 provision.go:87] duration metric: took 300.38696ms to configureAuth
	I1010 18:21:40.796439  325699 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:21:40.796606  325699 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:40.796693  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:40.814633  325699 main.go:141] libmachine: Using SSH client type: native
	I1010 18:21:40.814851  325699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1010 18:21:40.814874  325699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:21:41.126788  325699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:21:41.126818  325699 machine.go:96] duration metric: took 4.127117296s to provisionDockerMachine
	I1010 18:21:41.126831  325699 start.go:293] postStartSetup for "default-k8s-diff-port-821769" (driver="docker")
	I1010 18:21:41.126845  325699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:21:41.126909  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:21:41.126956  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.146094  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:41.244401  325699 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:21:41.247953  325699 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:21:41.247984  325699 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:21:41.247996  325699 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:21:41.248060  325699 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:21:41.248175  325699 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:21:41.248266  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:21:41.256669  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:41.275845  325699 start.go:296] duration metric: took 149.001179ms for postStartSetup
	I1010 18:21:41.275913  325699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:21:41.275950  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.294158  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:41.387292  325699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:21:41.391952  325699 fix.go:56] duration metric: took 4.745025215s for fixHost
	I1010 18:21:41.391980  325699 start.go:83] releasing machines lock for "default-k8s-diff-port-821769", held for 4.745085816s
	I1010 18:21:41.392032  325699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-821769
	I1010 18:21:41.410356  325699 ssh_runner.go:195] Run: cat /version.json
	I1010 18:21:41.410400  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.410462  325699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:21:41.410537  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:41.428673  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:41.429174  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:39.960290  324649 kubeadm.go:883] updating cluster {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:21:39.960390  324649 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:39.960442  324649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:39.991643  324649 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:39.991664  324649 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:21:39.991716  324649 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:40.018213  324649 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:40.018233  324649 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:21:40.018240  324649 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1010 18:21:40.018331  324649 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-121129 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
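
In that drop-in, the empty ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet unit, so the flag set on the following line is the complete command. On the node the merged result can be checked with:

	systemctl cat kubelet   # prints kubelet.service plus the 10-kubeadm.conf drop-in
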
	I1010 18:21:40.018427  324649 ssh_runner.go:195] Run: crio config
	I1010 18:21:40.065330  324649 cni.go:84] Creating CNI manager for ""
	I1010 18:21:40.065358  324649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:40.065375  324649 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1010 18:21:40.065395  324649 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-121129 NodeName:newest-cni-121129 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:21:40.065508  324649 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-121129"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:21:40.065561  324649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:21:40.074911  324649 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:21:40.074973  324649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:21:40.083566  324649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1010 18:21:40.097986  324649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:21:40.114282  324649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
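
The three scp steps above complete the render-and-ship phase: the kubelet drop-in, the kubelet service unit, and the multi-document kubeadm YAML shown earlier are generated in memory and copied to the node. As a rough illustration of that rendering step, here is a minimal Go sketch that fills a kubeadm-style template from a parameter struct; the struct fields and the abbreviated template body are illustrative, not minikube's actual bootstrapper code.

// Hypothetical sketch: render a kubeadm-style multi-document config from a
// parameter struct, as the log's kubeadm.go:196 output suggests happens
// internally. Values in main are taken from the log above.
package main

import (
	"os"
	"text/template"
)

type kubeadmParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	_ = t.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress: "192.168.85.2",
		BindPort:         8443,
		NodeName:         "newest-cni-121129",
		PodSubnet:        "10.42.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
	})
}

The rendered bytes would then be shipped with the same "scp memory" mechanism shown above rather than written locally.
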
	I1010 18:21:40.128847  324649 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:21:40.132698  324649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:21:40.143413  324649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:40.227094  324649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:40.249628  324649 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129 for IP: 192.168.85.2
	I1010 18:21:40.249652  324649 certs.go:195] generating shared ca certs ...
	I1010 18:21:40.249678  324649 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:40.249833  324649 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:21:40.249870  324649 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:21:40.249880  324649 certs.go:257] generating profile certs ...
	I1010 18:21:40.249964  324649 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key
	I1010 18:21:40.249986  324649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.crt with IP's: []
	I1010 18:21:40.601463  324649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.crt ...
	I1010 18:21:40.601490  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.crt: {Name:mk644ed6d675dd6a538c02d2c8e614b2a15b3122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:40.601663  324649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key ...
	I1010 18:21:40.601672  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key: {Name:mk914b6f6ffa18eaa800e7d301f088828f088f03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:40.601751  324649 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7
	I1010 18:21:40.601767  324649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1010 18:21:41.352224  324649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7 ...
	I1010 18:21:41.352248  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7: {Name:mkdef5060ad4b077648f6c85a78fa3bbbb5e73d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.352404  324649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7 ...
	I1010 18:21:41.352424  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7: {Name:mkfea0f84cddcdc4e3c69624946502bcf937c477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.352501  324649 certs.go:382] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt.89f266b7 -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt
	I1010 18:21:41.352570  324649 certs.go:386] copying /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7 -> /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key
	I1010 18:21:41.352640  324649 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key
	I1010 18:21:41.352657  324649 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt with IP's: []
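
crypto.go:68 above is minting a profile certificate signed by the shared minikubeCA, with the service IP, loopback, and node IP as SANs. A condensed sketch of that flow using Go's crypto/x509 follows; the key size, serial-number choice, and the throwaway self-signed CA in main are assumptions made to keep the example self-contained.

// Sketch: issue a CA-signed certificate with IP SANs, roughly the shape of
// crypto.go:68 ("Generating cert ... with IP's: [...]"). Serial handling,
// key usage, and PEM file writing are simplified.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func issueCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048) // key size is an assumption
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()), // simplified serial
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
		IPAddresses:  ips,                               // the SAN list from the log
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// In a real implementation the private key would be persisted alongside the cert.
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Throwaway self-signed CA standing in for the shared minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		BasicConstraintsValid: true,
		KeyUsage:              x509.KeyUsageCertSign,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, err := issueCert(ca, caKey, []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2"),
	})
	fmt.Println(len(der), err)
}
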
	I1010 18:21:41.590793  325699 ssh_runner.go:195] Run: systemctl --version
	I1010 18:21:41.597352  325699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:21:41.632391  325699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:21:41.637267  325699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:21:41.637329  325699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:21:41.646619  325699 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1010 18:21:41.646643  325699 start.go:495] detecting cgroup driver to use...
	I1010 18:21:41.646672  325699 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:21:41.646707  325699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:21:41.662702  325699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:21:41.675945  325699 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:21:41.675998  325699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:21:41.690577  325699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:21:41.703139  325699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:21:41.785080  325699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:21:41.887442  325699 docker.go:234] disabling docker service ...
	I1010 18:21:41.887510  325699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:21:41.902511  325699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:21:41.915792  325699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:21:41.998153  325699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:21:42.082320  325699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:21:42.095388  325699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:21:42.110606  325699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:21:42.110668  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.120566  325699 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:21:42.120611  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.130445  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.140220  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.149997  325699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:21:42.159172  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.168739  325699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.177930  325699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:21:42.187922  325699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:21:42.196256  325699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:21:42.204604  325699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:42.288532  325699 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:21:42.425073  325699 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:21:42.425143  325699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:21:42.429651  325699 start.go:563] Will wait 60s for crictl version
	I1010 18:21:42.429707  325699 ssh_runner.go:195] Run: which crictl
	I1010 18:21:42.433310  325699 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:21:42.459422  325699 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:21:42.459511  325699 ssh_runner.go:195] Run: crio --version
	I1010 18:21:42.491064  325699 ssh_runner.go:195] Run: crio --version
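
After restarting CRI-O, the run waits up to 60s for /var/run/crio/crio.sock to appear and then probes crictl and crio versions, as the lines above show. A minimal sketch of that wait loop; the 500ms poll interval is an assumption, since minikube's actual retry helper lives elsewhere.

// Sketch of the "Will wait 60s for socket path" step: poll until the CRI-O
// socket file exists, or give up at the deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present; safe to run `crictl version`
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
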
	I1010 18:21:42.523177  325699 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1010 18:21:42.524273  325699 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-821769 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:21:42.544600  325699 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1010 18:21:42.549336  325699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:21:42.561250  325699 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:21:42.561363  325699 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:21:42.561407  325699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:42.595069  325699 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:42.595092  325699 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:21:42.595137  325699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:21:42.621683  325699 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:21:42.621708  325699 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:21:42.621718  325699 kubeadm.go:934] updating node { 192.168.103.2 8444 v1.34.1 crio true true} ...
	I1010 18:21:42.621877  325699 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-821769 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:21:42.621955  325699 ssh_runner.go:195] Run: crio config
	I1010 18:21:42.670696  325699 cni.go:84] Creating CNI manager for ""
	I1010 18:21:42.670714  325699 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:42.670729  325699 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1010 18:21:42.670749  325699 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-821769 NodeName:default-k8s-diff-port-821769 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:21:42.670867  325699 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-821769"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:21:42.670920  325699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:21:42.679913  325699 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:21:42.679968  325699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:21:42.688618  325699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1010 18:21:42.703331  325699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:21:42.718311  325699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2227 bytes)
	I1010 18:21:42.732968  325699 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:21:42.736868  325699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
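
This bash one-liner (also used earlier for control-plane.minikube.internal and host.minikube.internal) updates /etc/hosts idempotently: filter out any existing line ending in the host name, append the fresh IP mapping, then copy the temp file back with sudo. A Go sketch of the same upsert, assuming the tab-separated entry format seen in the log:

// Sketch of the idempotent /etc/hosts rewrite performed by the bash
// one-liner above: drop stale entries for the host name, append the new one.
package main

import (
	"os"
	"strings"
)

func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // equivalent of grep -v $'\t<host>$'
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = upsertHostsEntry("/etc/hosts", "192.168.103.2", "control-plane.minikube.internal")
}
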
	I1010 18:21:42.747553  325699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:42.829086  325699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:42.858574  325699 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769 for IP: 192.168.103.2
	I1010 18:21:42.858598  325699 certs.go:195] generating shared ca certs ...
	I1010 18:21:42.858623  325699 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:42.858780  325699 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:21:42.858834  325699 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:21:42.858849  325699 certs.go:257] generating profile certs ...
	I1010 18:21:42.858967  325699 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/client.key
	I1010 18:21:42.859085  325699 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/apiserver.key.10168654
	I1010 18:21:42.859140  325699 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/proxy-client.key
	I1010 18:21:42.859285  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:21:42.859321  325699 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:21:42.859336  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:21:42.859370  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:21:42.859399  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:21:42.859429  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:21:42.859481  325699 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:42.860204  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:21:42.882094  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:21:42.903468  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:21:42.925737  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:21:42.953372  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1010 18:21:42.973504  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:21:42.992899  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:21:43.011728  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/default-k8s-diff-port-821769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:21:43.030624  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:21:43.049802  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:21:43.070120  325699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:21:43.090039  325699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:21:43.103785  325699 ssh_runner.go:195] Run: openssl version
	I1010 18:21:43.110111  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:21:43.118950  325699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:21:43.122454  325699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:21:43.122512  325699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:21:43.157901  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
	I1010 18:21:43.167111  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:21:43.176248  325699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:43.179836  325699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:43.179900  325699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:43.216894  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:21:43.226252  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:21:43.235390  325699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:21:43.239321  325699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:21:43.239380  325699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:21:43.273487  325699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:21:43.282570  325699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:21:43.286433  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:21:43.320357  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:21:43.361223  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:21:43.409478  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:21:43.456529  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:21:43.512033  325699 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
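
Each `openssl x509 -noout -in <cert> -checkend 86400` run above asks whether that control-plane certificate will still be valid 24 hours from now; a failing check would force regeneration before the cluster restart proceeds. The equivalent test in Go (the path in main is one of the certs from the log):

// Sketch: the Go equivalent of `openssl x509 -checkend 86400` — does the
// certificate expire within the given window?
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
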
	I1010 18:21:43.568244  325699 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-821769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-821769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:43.568348  325699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:21:43.568440  325699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:21:43.611528  325699 cri.go:89] found id: "1352ca41b0e7626fbf6ee43638506dfab18bd157572e9128f411ac1c5ae54538"
	I1010 18:21:43.611555  325699 cri.go:89] found id: "2aeadcb9e03cc805af5eff4f1b521299f31e4d618387d10eef543b4e95787f70"
	I1010 18:21:43.611560  325699 cri.go:89] found id: "6c6e229b2a8311cf4d60aad6c602e02c2923b5ba2309e536076e40579456e8e2"
	I1010 18:21:43.611565  325699 cri.go:89] found id: "c3f03c923ad6830325d9888fdf2ad9de25ac73298e25b5812f72951d65af2eec"
	I1010 18:21:43.611569  325699 cri.go:89] found id: ""
	I1010 18:21:43.611612  325699 ssh_runner.go:195] Run: sudo runc list -f json
	W1010 18:21:43.627173  325699 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:21:43Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:21:43.627256  325699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:21:43.638581  325699 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1010 18:21:43.638602  325699 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1010 18:21:43.638652  325699 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 18:21:43.650423  325699 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:21:43.651568  325699 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-821769" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:43.652341  325699 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-5815/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-821769" cluster setting kubeconfig missing "default-k8s-diff-port-821769" context setting]
	I1010 18:21:43.653567  325699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:43.655682  325699 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 18:21:43.667709  325699 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1010 18:21:43.667743  325699 kubeadm.go:601] duration metric: took 29.134937ms to restartPrimaryControlPlane
	I1010 18:21:43.667753  325699 kubeadm.go:402] duration metric: took 99.518506ms to StartCluster
	I1010 18:21:43.667770  325699 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:43.667845  325699 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:43.669889  325699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:43.670281  325699 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:21:43.670407  325699 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:21:43.670513  325699 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-821769"
	I1010 18:21:43.670534  325699 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-821769"
	W1010 18:21:43.670546  325699 addons.go:247] addon storage-provisioner should already be in state true
	I1010 18:21:43.670545  325699 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-821769"
	I1010 18:21:43.670572  325699 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-821769"
	I1010 18:21:43.670580  325699 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-821769"
	W1010 18:21:43.670582  325699 addons.go:247] addon dashboard should already be in state true
	I1010 18:21:43.670595  325699 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-821769"
	I1010 18:21:43.670677  325699 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:21:43.670572  325699 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:21:43.670904  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.671151  325699 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:43.671356  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.672130  325699 out.go:179] * Verifying Kubernetes components...
	I1010 18:21:43.672709  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.673037  325699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:43.701170  325699 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:21:43.703152  325699 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:21:43.703189  325699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:21:43.703293  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:43.709767  325699 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-821769"
	W1010 18:21:43.709840  325699 addons.go:247] addon default-storageclass should already be in state true
	I1010 18:21:43.709890  325699 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:21:43.710622  325699 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:21:43.711556  325699 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1010 18:21:43.715168  325699 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1010 18:21:43.716093  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1010 18:21:43.716116  325699 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1010 18:21:43.716174  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:43.745595  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:43.754680  325699 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:21:43.754766  325699 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:21:43.754853  325699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:21:43.766642  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:43.784887  325699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:21:43.856990  325699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:43.873309  325699 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-821769" to be "Ready" ...
	I1010 18:21:43.936166  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1010 18:21:43.936223  325699 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1010 18:21:43.955509  325699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:21:43.956951  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1010 18:21:43.956971  325699 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1010 18:21:43.985048  325699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:21:43.985772  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1010 18:21:43.986042  325699 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1010 18:21:44.008589  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1010 18:21:44.008614  325699 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1010 18:21:44.034035  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1010 18:21:44.034165  325699 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1010 18:21:44.061163  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1010 18:21:44.061253  325699 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1010 18:21:44.112492  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1010 18:21:44.112518  325699 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1010 18:21:44.149803  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1010 18:21:44.149896  325699 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1010 18:21:44.172145  325699 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:21:44.172172  325699 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1010 18:21:44.191656  325699 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:21:45.474823  325699 node_ready.go:49] node "default-k8s-diff-port-821769" is "Ready"
	I1010 18:21:45.474857  325699 node_ready.go:38] duration metric: took 1.601510652s for node "default-k8s-diff-port-821769" to be "Ready" ...
	I1010 18:21:45.474873  325699 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:21:45.474923  325699 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:21:45.570164  325699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.614616389s)
	I1010 18:21:46.101989  325699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.116012627s)
	I1010 18:21:46.102157  325699 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.910456027s)
	I1010 18:21:46.102189  325699 api_server.go:72] duration metric: took 2.431862039s to wait for apiserver process to appear ...
	I1010 18:21:46.102205  325699 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:21:46.102226  325699 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1010 18:21:46.103626  325699 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-821769 addons enable metrics-server
	
	I1010 18:21:46.104750  325699 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1010 18:21:46.105672  325699 addons.go:514] duration metric: took 2.435260331s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1010 18:21:46.106650  325699 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:21:46.106667  325699 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
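
The 500 above is the expected transient state right after an apiserver restart: /healthz reports each check with [+]/[-], and the two [-] post-start hooks (rbac bootstrap-roles and the system priority classes) simply have not finished yet, so minikube keeps polling until the endpoint returns 200. A sketch of that loop; TLS verification is skipped here only to keep the probe self-contained, which a real client would not do:

// Sketch of the healthz polling implied by api_server.go:253: keep hitting
// /healthz until it returns 200 or the deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // every [+] check passed
			}
		}
		time.Sleep(time.Second) // 500 with [-] hooks is normal while bootstrapping
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthy("https://192.168.103.2:8444/healthz", time.Minute))
}
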
	I1010 18:21:41.799013  324649 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt ...
	I1010 18:21:41.799039  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt: {Name:mk0669ceb9e9a4f760f7827d6d6abc6856417c2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.799218  324649 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key ...
	I1010 18:21:41.799235  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key: {Name:mk52379a2bae9262f9822bb1871c3d07af332ca7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:41.799416  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:21:41.799450  324649 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:21:41.799460  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:21:41.799483  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:21:41.799510  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:21:41.799531  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:21:41.799566  324649 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:21:41.800117  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:21:41.825626  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:21:41.848227  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:21:41.869328  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:21:41.890966  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:21:41.911349  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:21:41.931728  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:21:41.958462  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:21:41.979538  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:21:42.001499  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:21:42.024216  324649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:21:42.046423  324649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:21:42.061125  324649 ssh_runner.go:195] Run: openssl version
	I1010 18:21:42.067212  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:21:42.076458  324649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:42.080715  324649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:42.080767  324649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:21:42.116344  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:21:42.126106  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:21:42.135627  324649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:21:42.139483  324649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:21:42.139535  324649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:21:42.177476  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:21:42.187546  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:21:42.196815  324649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:21:42.200662  324649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:21:42.200712  324649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:21:42.244485  324649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
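
The three certificates installed above are made discoverable the way OpenSSL expects: for each PEM, "openssl x509 -hash -noout" yields a subject-name hash (b5213941, 51391683, 3ec20f2e here), and a symlink named <hash>.0 in /etc/ssl/certs points back at the PEM. A minimal local sketch of that flow in Go, shelling out to openssl just as the log does (file names are illustrative; minikube drives this over ssh_runner on the node):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // subjectHash shells out to openssl exactly as the log above does and
    // returns the subject-name hash used for the /etc/ssl/certs/<hash>.0 link.
    func subjectHash(pemPath string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        // Illustrative path; the logs use /usr/share/ca-certificates/minikubeCA.pem.
        cert := "minikubeCA.pem"
        hash, err := subjectHash(cert)
        if err != nil {
            panic(err)
        }
        link := hash + ".0"
        // Equivalent of: test -L <link> || ln -fs <cert> <link>
        if _, err := os.Lstat(link); err != nil {
            if err := os.Symlink(cert, link); err != nil {
                panic(err)
            }
        }
        fmt.Println("linked", link, "->", cert)
    }
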
	I1010 18:21:42.254296  324649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:21:42.258045  324649 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1010 18:21:42.258109  324649 kubeadm.go:400] StartCluster: {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:21:42.258208  324649 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:21:42.258261  324649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:21:42.287523  324649 cri.go:89] found id: ""
	I1010 18:21:42.287614  324649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:21:42.296824  324649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1010 18:21:42.306175  324649 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1010 18:21:42.306236  324649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1010 18:21:42.314702  324649 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1010 18:21:42.314726  324649 kubeadm.go:157] found existing configuration files:
	
	I1010 18:21:42.314769  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1010 18:21:42.323157  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1010 18:21:42.323223  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1010 18:21:42.331276  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1010 18:21:42.340535  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1010 18:21:42.340585  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1010 18:21:42.349218  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1010 18:21:42.357935  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1010 18:21:42.357997  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1010 18:21:42.366285  324649 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1010 18:21:42.374774  324649 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1010 18:21:42.374815  324649 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
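
Each grep above exits with status 2 because the kubeconfig file does not exist yet, and the file is removed regardless so kubeadm starts from a clean slate. A compact sketch of that cleanup pass in Go, assuming it runs as root on the node (paths copied from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, c := range confs {
            // grep exits non-zero when the endpoint (or the file itself) is
            // absent; either way the file is removed so kubeadm regenerates it.
            if err := exec.Command("grep", endpoint, c).Run(); err != nil {
                fmt.Println("removing stale", c)
                os.Remove(c) // error deliberately ignored, mirrors `rm -f`
            }
        }
    }
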
	I1010 18:21:42.383131  324649 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1010 18:21:42.446332  324649 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1010 18:21:42.516841  324649 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1010 18:21:46.602536  325699 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1010 18:21:46.609396  325699 api_server.go:279] https://192.168.103.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:21:46.609424  325699 api_server.go:103] status: https://192.168.103.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 18:21:47.102606  325699 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8444/healthz ...
	I1010 18:21:47.109427  325699 api_server.go:279] https://192.168.103.2:8444/healthz returned 200:
	ok
	I1010 18:21:47.110803  325699 api_server.go:141] control plane version: v1.34.1
	I1010 18:21:47.110830  325699 api_server.go:131] duration metric: took 1.008616848s to wait for apiserver health ...
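
The 500 above is the expected transient state: every check passes except the rbac/bootstrap-roles post-start hook, which flips once the bootstrap RBAC objects are installed, after which /healthz returns 200 "ok". A minimal polling loop in the same spirit (the address is the one from this log; InsecureSkipVerify is a shortcut for the sketch, where the real client instead trusts the minikube CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Skip verification for brevity only; see the lead-in note.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.103.2:8444/healthz"
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("%s returned 200: %s\n", url, body)
                    return
                }
                fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
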
	I1010 18:21:47.110841  325699 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:21:47.116316  325699 system_pods.go:59] 8 kube-system pods found
	I1010 18:21:47.116365  325699 system_pods.go:61] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:47.116444  325699 system_pods.go:61] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:21:47.116486  325699 system_pods.go:61] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:47.116495  325699 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:21:47.116503  325699 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:21:47.116509  325699 system_pods.go:61] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:47.116528  325699 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:21:47.116534  325699 system_pods.go:61] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Running
	I1010 18:21:47.116575  325699 system_pods.go:74] duration metric: took 5.693503ms to wait for pod list to return data ...
	I1010 18:21:47.116596  325699 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:21:47.119751  325699 default_sa.go:45] found service account: "default"
	I1010 18:21:47.119766  325699 default_sa.go:55] duration metric: took 3.155339ms for default service account to be created ...
	I1010 18:21:47.119774  325699 system_pods.go:116] waiting for k8s-apps to be running ...
	I1010 18:21:47.123540  325699 system_pods.go:86] 8 kube-system pods found
	I1010 18:21:47.123568  325699 system_pods.go:89] "coredns-66bc5c9577-wrz5v" [7a6485d8-d7c2-4cdc-a015-68b7754aa396] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1010 18:21:47.123578  325699 system_pods.go:89] "etcd-default-k8s-diff-port-821769" [b5edacc6-aaa2-4ee9-b0b1-330ce9248047] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:21:47.123585  325699 system_pods.go:89] "kindnet-4w475" [f4b100ab-44a4-49d1-bae7-d7dbdd293a80] Running
	I1010 18:21:47.123597  325699 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-821769" [d5671f82-586b-4ce8-954c-d0779d0759ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:21:47.123606  325699 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-821769" [04b0efc5-436e-4138-bbbc-ecb536f5118e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:21:47.123612  325699 system_pods.go:89] "kube-proxy-h2mzf" [0598db95-c0fc-49b8-a15b-26e4f96ed49c] Running
	I1010 18:21:47.123619  325699 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-821769" [e99518f9-57ed-46f5-b338-ba281829307d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:21:47.123624  325699 system_pods.go:89] "storage-provisioner" [63ba31a4-0bea-47b8-92f4-453fa7d83aea] Running
	I1010 18:21:47.123632  325699 system_pods.go:126] duration metric: took 3.852363ms to wait for k8s-apps to be running ...
	I1010 18:21:47.123639  325699 system_svc.go:44] waiting for kubelet service to be running ....
	I1010 18:21:47.123691  325699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:21:47.140804  325699 system_svc.go:56] duration metric: took 17.156579ms WaitForService to wait for kubelet
	I1010 18:21:47.140832  325699 kubeadm.go:586] duration metric: took 3.47051062s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1010 18:21:47.140854  325699 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:21:47.143989  325699 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:21:47.144014  325699 node_conditions.go:123] node cpu capacity is 8
	I1010 18:21:47.144037  325699 node_conditions.go:105] duration metric: took 3.169915ms to run NodePressure ...
	I1010 18:21:47.144073  325699 start.go:241] waiting for startup goroutines ...
	I1010 18:21:47.144085  325699 start.go:246] waiting for cluster config update ...
	I1010 18:21:47.144097  325699 start.go:255] writing updated cluster config ...
	I1010 18:21:47.144428  325699 ssh_runner.go:195] Run: rm -f paused
	I1010 18:21:47.148485  325699 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:21:47.152684  325699 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wrz5v" in "kube-system" namespace to be "Ready" or be gone ...
	W1010 18:21:49.159234  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:21:51.159917  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	I1010 18:21:54.671330  324649 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1010 18:21:54.671403  324649 kubeadm.go:318] [preflight] Running pre-flight checks
	I1010 18:21:54.671553  324649 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1010 18:21:54.671651  324649 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1010 18:21:54.671725  324649 kubeadm.go:318] OS: Linux
	I1010 18:21:54.671814  324649 kubeadm.go:318] CGROUPS_CPU: enabled
	I1010 18:21:54.671884  324649 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1010 18:21:54.671966  324649 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1010 18:21:54.672038  324649 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1010 18:21:54.672209  324649 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1010 18:21:54.672282  324649 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1010 18:21:54.672355  324649 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1010 18:21:54.672420  324649 kubeadm.go:318] CGROUPS_IO: enabled
	I1010 18:21:54.672526  324649 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1010 18:21:54.672656  324649 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1010 18:21:54.672787  324649 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1010 18:21:54.672879  324649 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1010 18:21:54.674338  324649 out.go:252]   - Generating certificates and keys ...
	I1010 18:21:54.674468  324649 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1010 18:21:54.674566  324649 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1010 18:21:54.674663  324649 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1010 18:21:54.674748  324649 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1010 18:21:54.674944  324649 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1010 18:21:54.675069  324649 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1010 18:21:54.675191  324649 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1010 18:21:54.675392  324649 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-121129] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1010 18:21:54.675505  324649 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1010 18:21:54.675683  324649 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-121129] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1010 18:21:54.675777  324649 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1010 18:21:54.675867  324649 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1010 18:21:54.675944  324649 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1010 18:21:54.676036  324649 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1010 18:21:54.676116  324649 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1010 18:21:54.676189  324649 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1010 18:21:54.676273  324649 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1010 18:21:54.676384  324649 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1010 18:21:54.676466  324649 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1010 18:21:54.676654  324649 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1010 18:21:54.676759  324649 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1010 18:21:54.677760  324649 out.go:252]   - Booting up control plane ...
	I1010 18:21:54.677869  324649 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1010 18:21:54.677994  324649 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1010 18:21:54.678142  324649 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1010 18:21:54.678297  324649 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1010 18:21:54.678437  324649 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1010 18:21:54.678603  324649 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1010 18:21:54.678727  324649 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1010 18:21:54.678793  324649 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1010 18:21:54.678974  324649 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1010 18:21:54.679147  324649 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1010 18:21:54.679247  324649 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001565534s
	I1010 18:21:54.679377  324649 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1010 18:21:54.679492  324649 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1010 18:21:54.679630  324649 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1010 18:21:54.679737  324649 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1010 18:21:54.679825  324649 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.549433347s
	I1010 18:21:54.680002  324649 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.970085456s
	I1010 18:21:54.680148  324649 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501809858s
	I1010 18:21:54.680328  324649 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1010 18:21:54.680766  324649 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1010 18:21:54.680863  324649 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1010 18:21:54.681187  324649 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-121129 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1010 18:21:54.681288  324649 kubeadm.go:318] [bootstrap-token] Using token: bxrz3r.dhj894gjckgiifa5
	I1010 18:21:54.682555  324649 out.go:252]   - Configuring RBAC rules ...
	I1010 18:21:54.682720  324649 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1010 18:21:54.682861  324649 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1010 18:21:54.683223  324649 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1010 18:21:54.683374  324649 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1010 18:21:54.683531  324649 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1010 18:21:54.683652  324649 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1010 18:21:54.683817  324649 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1010 18:21:54.683858  324649 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1010 18:21:54.683916  324649 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1010 18:21:54.683927  324649 kubeadm.go:318] 
	I1010 18:21:54.684010  324649 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1010 18:21:54.684019  324649 kubeadm.go:318] 
	I1010 18:21:54.684256  324649 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1010 18:21:54.684266  324649 kubeadm.go:318] 
	I1010 18:21:54.684294  324649 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1010 18:21:54.684400  324649 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1010 18:21:54.684473  324649 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1010 18:21:54.684485  324649 kubeadm.go:318] 
	I1010 18:21:54.684571  324649 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1010 18:21:54.684582  324649 kubeadm.go:318] 
	I1010 18:21:54.684645  324649 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1010 18:21:54.684665  324649 kubeadm.go:318] 
	I1010 18:21:54.684723  324649 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1010 18:21:54.684808  324649 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1010 18:21:54.684890  324649 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1010 18:21:54.684903  324649 kubeadm.go:318] 
	I1010 18:21:54.685004  324649 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1010 18:21:54.685090  324649 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1010 18:21:54.685097  324649 kubeadm.go:318] 
	I1010 18:21:54.685210  324649 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token bxrz3r.dhj894gjckgiifa5 \
	I1010 18:21:54.685318  324649 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f \
	I1010 18:21:54.685336  324649 kubeadm.go:318] 	--control-plane 
	I1010 18:21:54.685339  324649 kubeadm.go:318] 
	I1010 18:21:54.685409  324649 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1010 18:21:54.685413  324649 kubeadm.go:318] 
	I1010 18:21:54.685480  324649 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token bxrz3r.dhj894gjckgiifa5 \
	I1010 18:21:54.685663  324649 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:08dcb68c3233bd2646103f50182dc3a0cc6156f6b69cb66c341f613324bcc71f 
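
The --discovery-token-ca-cert-hash printed in the join commands pins the cluster CA for joining nodes: kubeadm computes it as SHA-256 over the DER-encoded SubjectPublicKeyInfo of the CA certificate. A sketch that reproduces the same value from a ca.crt on disk (the path is an assumption; in these logs the CA lives at /var/lib/minikube/certs/ca.crt on the node):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm's discovery hash: SHA-256 of the DER SubjectPublicKeyInfo.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
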
	I1010 18:21:54.685670  324649 cni.go:84] Creating CNI manager for ""
	I1010 18:21:54.685676  324649 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:21:54.688192  324649 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1010 18:21:53.160278  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:21:55.661387  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	I1010 18:21:54.689986  324649 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1010 18:21:54.695592  324649 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1010 18:21:54.695609  324649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1010 18:21:54.715333  324649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1010 18:21:55.021556  324649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1010 18:21:55.021641  324649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:21:55.021654  324649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-121129 minikube.k8s.io/updated_at=2025_10_10T18_21_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46 minikube.k8s.io/name=newest-cni-121129 minikube.k8s.io/primary=true
	I1010 18:21:55.132177  324649 ops.go:34] apiserver oom_adj: -16
	I1010 18:21:55.132428  324649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:21:55.632549  324649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:21:56.132691  324649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:21:56.632497  324649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:21:57.132338  324649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:21:57.632202  324649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:21:58.132595  324649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:21:58.632214  324649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:21:59.132188  324649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:21:59.632608  324649 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1010 18:21:59.703255  324649 kubeadm.go:1113] duration metric: took 4.681683822s to wait for elevateKubeSystemPrivileges
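
The burst of "kubectl get sa default" calls at roughly 500ms intervals above is a readiness gate: the clusterrolebinding and node-labeling steps can only succeed once the controller manager has created the default service account in the new cluster, so the account's existence is polled first. A stripped-down version of that poll, assuming kubectl on PATH rather than the pinned /var/lib/minikube/binaries path in the log:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // Succeeds only once the "default" ServiceAccount has been
            // created by the controller manager in the new cluster.
            if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for default service account")
    }
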
	I1010 18:21:59.703293  324649 kubeadm.go:402] duration metric: took 17.445186244s to StartCluster
	I1010 18:21:59.703314  324649 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:59.703393  324649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:21:59.704391  324649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:21:59.704645  324649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1010 18:21:59.704650  324649 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:21:59.704717  324649 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:21:59.704847  324649 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-121129"
	I1010 18:21:59.704869  324649 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-121129"
	I1010 18:21:59.704877  324649 addons.go:69] Setting default-storageclass=true in profile "newest-cni-121129"
	I1010 18:21:59.704902  324649 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:21:59.704908  324649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-121129"
	I1010 18:21:59.704901  324649 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:21:59.705292  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:59.705390  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:59.706192  324649 out.go:179] * Verifying Kubernetes components...
	I1010 18:21:59.707300  324649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:21:59.729155  324649 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:21:59.730273  324649 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:21:59.730296  324649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:21:59.730346  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:59.730444  324649 addons.go:238] Setting addon default-storageclass=true in "newest-cni-121129"
	I1010 18:21:59.730487  324649 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:21:59.730939  324649 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:21:59.762128  324649 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:21:59.762153  324649 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:21:59.762211  324649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:21:59.762525  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:59.787212  324649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:21:59.803997  324649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1010 18:21:59.856512  324649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:21:59.950363  324649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:21:59.973478  324649 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1010 18:21:59.974747  324649 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:21:59.974809  324649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:21:59.980086  324649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:22:00.072877  324649 api_server.go:72] duration metric: took 368.198343ms to wait for apiserver process to appear ...
	I1010 18:22:00.072910  324649 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:22:00.072930  324649 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:00.077811  324649 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1010 18:22:00.079080  324649 api_server.go:141] control plane version: v1.34.1
	I1010 18:22:00.079109  324649 api_server.go:131] duration metric: took 6.190813ms to wait for apiserver health ...
	I1010 18:22:00.079120  324649 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:22:00.082433  324649 system_pods.go:59] 7 kube-system pods found
	I1010 18:22:00.082466  324649 system_pods.go:61] "coredns-66bc5c9577-bbxwj" [54b0d9c6-555f-476b-90d2-aca531478020] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1010 18:22:00.082478  324649 system_pods.go:61] "etcd-newest-cni-121129" [24b69503-efe0-4418-b656-58b90f7d7420] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:22:00.082490  324649 system_pods.go:61] "kindnet-9ml5n" [22d3f2b7-d65b-4c8e-a02f-58ead02d9794] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1010 18:22:00.082500  324649 system_pods.go:61] "kube-apiserver-newest-cni-121129" [c429c3d5-c663-453e-9d48-8eacc534ebf4] Running
	I1010 18:22:00.082511  324649 system_pods.go:61] "kube-controller-manager-newest-cni-121129" [5e35e588-6a2a-414e-aea9-4d1d8b7897dc] Running
	I1010 18:22:00.082522  324649 system_pods.go:61] "kube-proxy-sw4cj" [82e9ec15-44c0-4bfd-8b16-3862f7bb01a6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1010 18:22:00.082536  324649 system_pods.go:61] "kube-scheduler-newest-cni-121129" [55bc4998-af60-4c82-a3cc-18ccc57ede90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:22:00.082548  324649 system_pods.go:74] duration metric: took 3.419712ms to wait for pod list to return data ...
	I1010 18:22:00.082561  324649 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:22:00.084779  324649 default_sa.go:45] found service account: "default"
	I1010 18:22:00.084800  324649 default_sa.go:55] duration metric: took 2.230757ms for default service account to be created ...
	I1010 18:22:00.084814  324649 kubeadm.go:586] duration metric: took 380.141316ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1010 18:22:00.084839  324649 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:22:00.087222  324649 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:22:00.087243  324649 node_conditions.go:123] node cpu capacity is 8
	I1010 18:22:00.087254  324649 node_conditions.go:105] duration metric: took 2.410913ms to run NodePressure ...
	I1010 18:22:00.087264  324649 start.go:241] waiting for startup goroutines ...
	I1010 18:22:00.281906  324649 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1010 18:22:00.282953  324649 addons.go:514] duration metric: took 578.238022ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1010 18:22:00.477280  324649 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-121129" context rescaled to 1 replicas
	I1010 18:22:00.477327  324649 start.go:246] waiting for cluster config update ...
	I1010 18:22:00.477344  324649 start.go:255] writing updated cluster config ...
	I1010 18:22:00.477680  324649 ssh_runner.go:195] Run: rm -f paused
	I1010 18:22:00.528482  324649 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:22:00.530383  324649 out.go:179] * Done! kubectl is now configured to use "newest-cni-121129" cluster and "default" namespace by default
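
The rescale of the coredns deployment to 1 replica just before "Done!" is the usual single-node adjustment. A hand-run equivalent of the same change, sketched via kubectl scale rather than the API call minikube makes (kubectl on PATH assumed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Scale coredns down to a single replica in kube-system, matching
        // the "rescaled to 1 replicas" log line above.
        out, err := exec.Command("kubectl", "-n", "kube-system",
            "scale", "deployment", "coredns", "--replicas=1").CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            panic(err)
        }
    }
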
	
	
	==> CRI-O <==
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.117606927Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.11770815Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=daefee27-eec0-4b33-a375-da014b768817 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.119516458Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.120344171Z" level=info msg="Ran pod sandbox c64420ebb23b9b4cf53e9a25a33903570871e4761dac101ac1900c0c0a03df3f with infra container: kube-system/kube-proxy-sw4cj/POD" id=daefee27-eec0-4b33-a375-da014b768817 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.120606555Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=3cb8eca7-9f2e-4d01-86b0-747611ad67dd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.121740099Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=1547067f-fa50-4571-9bdb-5e79755632c1 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.122223957Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.123030166Z" level=info msg="Ran pod sandbox 486ec45a901403547a81a3fbfc8dc2cd47530e1747e4a4cec2e545b24e1bbd4b with infra container: kube-system/kindnet-9ml5n/POD" id=3cb8eca7-9f2e-4d01-86b0-747611ad67dd name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.123964529Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4bd64b70-9e4f-4cc7-898c-ad6282f8c97d name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.124378459Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=07383aca-8cfe-43a3-a3c5-1277df9abef1 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.125210783Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=029cd37e-a4fc-4d80-80ae-4aacaaf70e1f name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.127428581Z" level=info msg="Creating container: kube-system/kube-proxy-sw4cj/kube-proxy" id=f7991913-210a-49a7-8465-90cbed1f5596 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.127666291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.128804562Z" level=info msg="Creating container: kube-system/kindnet-9ml5n/kindnet-cni" id=8da1dfa5-6309-446e-b7d7-ea86634a352f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.130108465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.13272466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.13387957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.135816483Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.13623762Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.166308437Z" level=info msg="Created container 6005efa081ffdb1b32b804ec39eec95ebb5f7ba694f863584a790b2a671003b0: kube-system/kindnet-9ml5n/kindnet-cni" id=8da1dfa5-6309-446e-b7d7-ea86634a352f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.167134923Z" level=info msg="Starting container: 6005efa081ffdb1b32b804ec39eec95ebb5f7ba694f863584a790b2a671003b0" id=a98f4723-e2f9-4c6b-8c58-03a5be7f7e42 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.168701236Z" level=info msg="Created container d14511781e7f5a11b6859cfc3e501c61bb6e7e51aed67512014331d5b310734c: kube-system/kube-proxy-sw4cj/kube-proxy" id=f7991913-210a-49a7-8465-90cbed1f5596 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.169355299Z" level=info msg="Started container" PID=1672 containerID=6005efa081ffdb1b32b804ec39eec95ebb5f7ba694f863584a790b2a671003b0 description=kube-system/kindnet-9ml5n/kindnet-cni id=a98f4723-e2f9-4c6b-8c58-03a5be7f7e42 name=/runtime.v1.RuntimeService/StartContainer sandboxID=486ec45a901403547a81a3fbfc8dc2cd47530e1747e4a4cec2e545b24e1bbd4b
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.169356004Z" level=info msg="Starting container: d14511781e7f5a11b6859cfc3e501c61bb6e7e51aed67512014331d5b310734c" id=0d6a6a7f-eb39-474e-826f-27261f235d6d name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:22:00 newest-cni-121129 crio[801]: time="2025-10-10T18:22:00.172613328Z" level=info msg="Started container" PID=1673 containerID=d14511781e7f5a11b6859cfc3e501c61bb6e7e51aed67512014331d5b310734c description=kube-system/kube-proxy-sw4cj/kube-proxy id=0d6a6a7f-eb39-474e-826f-27261f235d6d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c64420ebb23b9b4cf53e9a25a33903570871e4761dac101ac1900c0c0a03df3f
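
Each pod start in the CRI-O log above follows the standard CRI lifecycle: RunPodSandbox brings up the infra sandbox, then CreateContainer and StartContainer run the workload inside it. The same three calls can be driven by hand with crictl; a sketch shelling out from Go (pod.json and container.json are CRI config files you would supply, standing in for what the kubelet sends):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // run executes a crictl subcommand and returns its trimmed stdout.
    func run(args ...string) string {
        out, err := exec.Command("crictl", args...).Output()
        if err != nil {
            panic(err)
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        // Mirrors RunPodSandbox -> CreateContainer -> StartContainer above.
        podID := run("runp", "pod.json")
        ctrID := run("create", podID, "container.json", "pod.json")
        run("start", ctrID)
        fmt.Println("started container", ctrID, "in sandbox", podID)
    }
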
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	6005efa081ffd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   486ec45a90140       kindnet-9ml5n                               kube-system
	d14511781e7f5       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   c64420ebb23b9       kube-proxy-sw4cj                            kube-system
	aab693007927b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   99db7f42c3588       kube-apiserver-newest-cni-121129            kube-system
	fc321f2bf66a1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   b6a3ca02dc99e       etcd-newest-cni-121129                      kube-system
	0e2d045322aba       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   8c8d7ad78d774       kube-controller-manager-newest-cni-121129   kube-system
	a988969f8acf3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   17341b598560a       kube-scheduler-newest-cni-121129            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-121129
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-121129
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=newest-cni-121129
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_21_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:21:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-121129
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:21:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:21:54 +0000   Fri, 10 Oct 2025 18:21:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:21:54 +0000   Fri, 10 Oct 2025 18:21:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:21:54 +0000   Fri, 10 Oct 2025 18:21:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 10 Oct 2025 18:21:54 +0000   Fri, 10 Oct 2025 18:21:49 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-121129
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                88739fa5-1b3b-4f5e-adda-c7c74720b2ef
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-121129                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-9ml5n                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-121129             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-121129    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-sw4cj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-121129             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 8s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node newest-cni-121129 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node newest-cni-121129 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node newest-cni-121129 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-121129 event: Registered Node newest-cni-121129 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 95 0c 3e 92 2e 08 06
	[  +0.052845] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[ +11.354316] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[  +7.101927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 9b 73 27 8c 80 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[  +6.287191] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 27 2d 28 d6 46 08 06
	[  +0.000293] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[Oct10 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 8c 22 f6 6b cf 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 29 bf 13 20 f9 08 06
	[ +15.511156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e d6 74 aa 27 d0 08 06
	[  +0.008495] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	[Oct10 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 0b 54 33 52 4e 08 06
	[  +0.000597] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	
	
	==> etcd [fc321f2bf66a1ae3c2373b8b33253124144c7c9bcbf803f768ef90b0baacefdc] <==
	{"level":"warn","ts":"2025-10-10T18:21:50.424540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.434282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.443836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.452822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.464952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.474937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.483759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.492799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.502041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.512217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.521881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.529772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.537867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.546615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.555305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.564910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.573106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.581294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.590966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.598965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.607176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.622562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.630016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.637248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:50.706946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45180","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:22:01 up  1:04,  0 user,  load average: 5.78, 4.78, 3.04
	Linux newest-cni-121129 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6005efa081ffdb1b32b804ec39eec95ebb5f7ba694f863584a790b2a671003b0] <==
	I1010 18:22:00.451864       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:22:00.452207       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1010 18:22:00.452373       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:22:00.452389       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:22:00.452412       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:22:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:22:00.667252       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:22:00.667281       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:22:00.667300       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:22:00.667488       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 18:22:01.067545       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:22:01.067618       1 metrics.go:72] Registering metrics
	I1010 18:22:01.068257       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [aab693007927b959a163ffdcd5f1c20995e2e66153ad8dc937a0d5540da8c95e] <==
	E1010 18:21:51.404716       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1010 18:21:51.433147       1 controller.go:667] quota admission added evaluator for: namespaces
	I1010 18:21:51.435357       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:21:51.436374       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1010 18:21:51.443474       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:21:51.443556       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1010 18:21:51.609414       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:21:52.255252       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1010 18:21:52.324022       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1010 18:21:52.324039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:21:52.933857       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:21:52.985626       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:21:53.040218       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1010 18:21:53.048000       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1010 18:21:53.049469       1 controller.go:667] quota admission added evaluator for: endpoints
	I1010 18:21:53.054211       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1010 18:21:53.743651       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1010 18:21:54.072526       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1010 18:21:54.085707       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1010 18:21:54.096261       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1010 18:21:59.443532       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:21:59.447623       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:21:59.743089       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1010 18:21:59.792211       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1010 18:21:59.792216       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [0e2d045322abaac22f4a171749c9585791d4ee8c5c4d74b1fbbd591c4e6513d2] <==
	I1010 18:21:58.738689       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1010 18:21:58.739919       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1010 18:21:58.739939       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1010 18:21:58.739991       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1010 18:21:58.740008       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1010 18:21:58.740039       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1010 18:21:58.740069       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1010 18:21:58.740118       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 18:21:58.740132       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1010 18:21:58.740141       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1010 18:21:58.740310       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1010 18:21:58.741487       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1010 18:21:58.741606       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1010 18:21:58.741713       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-121129"
	I1010 18:21:58.741762       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1010 18:21:58.741856       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1010 18:21:58.743640       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1010 18:21:58.745532       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:21:58.745563       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1010 18:21:58.747711       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1010 18:21:58.749896       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1010 18:21:58.753615       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1010 18:21:58.753716       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1010 18:21:58.759100       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1010 18:21:58.767406       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [d14511781e7f5a11b6859cfc3e501c61bb6e7e51aed67512014331d5b310734c] <==
	I1010 18:22:00.210713       1 server_linux.go:53] "Using iptables proxy"
	I1010 18:22:00.268196       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 18:22:00.368637       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 18:22:00.368676       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1010 18:22:00.368794       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:22:00.387173       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:22:00.387224       1 server_linux.go:132] "Using iptables Proxier"
	I1010 18:22:00.392767       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:22:00.393171       1 server.go:527] "Version info" version="v1.34.1"
	I1010 18:22:00.393201       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:22:00.395797       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 18:22:00.395815       1 config.go:106] "Starting endpoint slice config controller"
	I1010 18:22:00.395823       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 18:22:00.395823       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 18:22:00.395839       1 config.go:309] "Starting node config controller"
	I1010 18:22:00.395854       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 18:22:00.395861       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 18:22:00.395803       1 config.go:200] "Starting service config controller"
	I1010 18:22:00.395870       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 18:22:00.496965       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1010 18:22:00.496983       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1010 18:22:00.497012       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a988969f8acf3a1ee86c0cf0bf7cd6075eaf8358de7e714a871a3c38b0d25618] <==
	E1010 18:21:51.306204       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1010 18:21:51.306235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1010 18:21:51.305791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1010 18:21:51.305902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1010 18:21:51.305676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1010 18:21:51.306261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1010 18:21:51.306467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1010 18:21:51.305626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1010 18:21:51.306589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1010 18:21:51.307004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1010 18:21:52.131224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1010 18:21:52.237120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1010 18:21:52.282717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1010 18:21:52.283899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1010 18:21:52.411398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1010 18:21:52.430926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1010 18:21:52.441704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1010 18:21:52.443092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1010 18:21:52.447685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1010 18:21:52.455255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1010 18:21:52.540536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1010 18:21:52.548859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1010 18:21:52.640698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1010 18:21:52.691393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1010 18:21:54.396442       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 10 18:21:54 newest-cni-121129 kubelet[1375]: I1010 18:21:54.270364    1375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a0a13e1303670b5e2ec307208d9ad2ac-kubeconfig\") pod \"kube-controller-manager-newest-cni-121129\" (UID: \"a0a13e1303670b5e2ec307208d9ad2ac\") " pod="kube-system/kube-controller-manager-newest-cni-121129"
	Oct 10 18:21:54 newest-cni-121129 kubelet[1375]: I1010 18:21:54.957462    1375 apiserver.go:52] "Watching apiserver"
	Oct 10 18:21:54 newest-cni-121129 kubelet[1375]: I1010 18:21:54.968274    1375 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 10 18:21:55 newest-cni-121129 kubelet[1375]: I1010 18:21:55.021291    1375 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-121129"
	Oct 10 18:21:55 newest-cni-121129 kubelet[1375]: I1010 18:21:55.021646    1375 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-121129"
	Oct 10 18:21:55 newest-cni-121129 kubelet[1375]: I1010 18:21:55.021738    1375 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-121129"
	Oct 10 18:21:55 newest-cni-121129 kubelet[1375]: E1010 18:21:55.039442    1375 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-121129\" already exists" pod="kube-system/kube-apiserver-newest-cni-121129"
	Oct 10 18:21:55 newest-cni-121129 kubelet[1375]: E1010 18:21:55.039828    1375 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-121129\" already exists" pod="kube-system/kube-scheduler-newest-cni-121129"
	Oct 10 18:21:55 newest-cni-121129 kubelet[1375]: E1010 18:21:55.043115    1375 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-121129\" already exists" pod="kube-system/etcd-newest-cni-121129"
	Oct 10 18:21:55 newest-cni-121129 kubelet[1375]: I1010 18:21:55.080214    1375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-121129" podStartSLOduration=1.080187888 podStartE2EDuration="1.080187888s" podCreationTimestamp="2025-10-10 18:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:21:55.064691975 +0000 UTC m=+1.186444774" watchObservedRunningTime="2025-10-10 18:21:55.080187888 +0000 UTC m=+1.201940679"
	Oct 10 18:21:55 newest-cni-121129 kubelet[1375]: I1010 18:21:55.096044    1375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-121129" podStartSLOduration=1.096019651 podStartE2EDuration="1.096019651s" podCreationTimestamp="2025-10-10 18:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:21:55.080613492 +0000 UTC m=+1.202366277" watchObservedRunningTime="2025-10-10 18:21:55.096019651 +0000 UTC m=+1.217772445"
	Oct 10 18:21:55 newest-cni-121129 kubelet[1375]: I1010 18:21:55.097220    1375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-121129" podStartSLOduration=1.097203044 podStartE2EDuration="1.097203044s" podCreationTimestamp="2025-10-10 18:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:21:55.09684407 +0000 UTC m=+1.218596859" watchObservedRunningTime="2025-10-10 18:21:55.097203044 +0000 UTC m=+1.218955835"
	Oct 10 18:21:55 newest-cni-121129 kubelet[1375]: I1010 18:21:55.125002    1375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-121129" podStartSLOduration=1.124964189 podStartE2EDuration="1.124964189s" podCreationTimestamp="2025-10-10 18:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:21:55.110256658 +0000 UTC m=+1.232009463" watchObservedRunningTime="2025-10-10 18:21:55.124964189 +0000 UTC m=+1.246716965"
	Oct 10 18:21:58 newest-cni-121129 kubelet[1375]: I1010 18:21:58.745958    1375 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 10 18:21:58 newest-cni-121129 kubelet[1375]: I1010 18:21:58.746589    1375 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 10 18:21:59 newest-cni-121129 kubelet[1375]: I1010 18:21:59.910982    1375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j98f7\" (UniqueName: \"kubernetes.io/projected/82e9ec15-44c0-4bfd-8b16-3862f7bb01a6-kube-api-access-j98f7\") pod \"kube-proxy-sw4cj\" (UID: \"82e9ec15-44c0-4bfd-8b16-3862f7bb01a6\") " pod="kube-system/kube-proxy-sw4cj"
	Oct 10 18:21:59 newest-cni-121129 kubelet[1375]: I1010 18:21:59.911043    1375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njvtc\" (UniqueName: \"kubernetes.io/projected/22d3f2b7-d65b-4c8e-a02f-58ead02d9794-kube-api-access-njvtc\") pod \"kindnet-9ml5n\" (UID: \"22d3f2b7-d65b-4c8e-a02f-58ead02d9794\") " pod="kube-system/kindnet-9ml5n"
	Oct 10 18:21:59 newest-cni-121129 kubelet[1375]: I1010 18:21:59.911101    1375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82e9ec15-44c0-4bfd-8b16-3862f7bb01a6-lib-modules\") pod \"kube-proxy-sw4cj\" (UID: \"82e9ec15-44c0-4bfd-8b16-3862f7bb01a6\") " pod="kube-system/kube-proxy-sw4cj"
	Oct 10 18:21:59 newest-cni-121129 kubelet[1375]: I1010 18:21:59.911124    1375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/22d3f2b7-d65b-4c8e-a02f-58ead02d9794-cni-cfg\") pod \"kindnet-9ml5n\" (UID: \"22d3f2b7-d65b-4c8e-a02f-58ead02d9794\") " pod="kube-system/kindnet-9ml5n"
	Oct 10 18:21:59 newest-cni-121129 kubelet[1375]: I1010 18:21:59.911147    1375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/82e9ec15-44c0-4bfd-8b16-3862f7bb01a6-kube-proxy\") pod \"kube-proxy-sw4cj\" (UID: \"82e9ec15-44c0-4bfd-8b16-3862f7bb01a6\") " pod="kube-system/kube-proxy-sw4cj"
	Oct 10 18:21:59 newest-cni-121129 kubelet[1375]: I1010 18:21:59.911167    1375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82e9ec15-44c0-4bfd-8b16-3862f7bb01a6-xtables-lock\") pod \"kube-proxy-sw4cj\" (UID: \"82e9ec15-44c0-4bfd-8b16-3862f7bb01a6\") " pod="kube-system/kube-proxy-sw4cj"
	Oct 10 18:21:59 newest-cni-121129 kubelet[1375]: I1010 18:21:59.911196    1375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22d3f2b7-d65b-4c8e-a02f-58ead02d9794-lib-modules\") pod \"kindnet-9ml5n\" (UID: \"22d3f2b7-d65b-4c8e-a02f-58ead02d9794\") " pod="kube-system/kindnet-9ml5n"
	Oct 10 18:21:59 newest-cni-121129 kubelet[1375]: I1010 18:21:59.911227    1375 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22d3f2b7-d65b-4c8e-a02f-58ead02d9794-xtables-lock\") pod \"kindnet-9ml5n\" (UID: \"22d3f2b7-d65b-4c8e-a02f-58ead02d9794\") " pod="kube-system/kindnet-9ml5n"
	Oct 10 18:22:01 newest-cni-121129 kubelet[1375]: I1010 18:22:01.056447    1375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9ml5n" podStartSLOduration=2.056422762 podStartE2EDuration="2.056422762s" podCreationTimestamp="2025-10-10 18:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:22:01.056341824 +0000 UTC m=+7.178094619" watchObservedRunningTime="2025-10-10 18:22:01.056422762 +0000 UTC m=+7.178175556"
	Oct 10 18:22:01 newest-cni-121129 kubelet[1375]: I1010 18:22:01.069101    1375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sw4cj" podStartSLOduration=2.069076224 podStartE2EDuration="2.069076224s" podCreationTimestamp="2025-10-10 18:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-10 18:22:01.068878117 +0000 UTC m=+7.190630910" watchObservedRunningTime="2025-10-10 18:22:01.069076224 +0000 UTC m=+7.190829018"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-121129 -n newest-cni-121129
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-121129 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-bbxwj storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-121129 describe pod coredns-66bc5c9577-bbxwj storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-121129 describe pod coredns-66bc5c9577-bbxwj storage-provisioner: exit status 1 (61.451311ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-bbxwj" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-121129 describe pod coredns-66bc5c9577-bbxwj storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.05s)
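Reading the describe output above, the node was still NotReady at test time ("container runtime network not ready ... no CNI configuration file in /etc/cni/net.d/") and kindnet-9ml5n was only 2s old, which is consistent with coredns-66bc5c9577-bbxwj and storage-provisioner not yet existing when the post-mortem ran. The Go sketch below is a minimal readiness gate one could run before exercising addons; the polling loop, the 2-minute budget, and the 2-second interval are illustrative assumptions and not test-suite code. Only kubectl, the context name, and the node name come from the log above.

	package main

	// Hypothetical readiness gate (illustrative, not from the minikube test suite):
	// poll the node's Ready condition before running addon commands, since the
	// describe output shows the addon test ran while the CNI was still coming up.
	import (
		"context"
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		for {
			// kubectl prints the condition status ("True"/"False") without a newline.
			out, err := exec.CommandContext(ctx, "kubectl", "--context", "newest-cni-121129",
				"get", "node", "newest-cni-121129", "-o",
				`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
			if err == nil && string(out) == "True" {
				fmt.Println("node Ready; safe to run addon commands")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Fprintln(os.Stderr, "timed out waiting for node Ready")
				os.Exit(1)
			case <-time.After(2 * time.Second):
			}
		}
	}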

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-121129 --alsologtostderr -v=1
E1010 18:22:16.067644    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-121129 --alsologtostderr -v=1: exit status 80 (1.949375331s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-121129 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 18:22:15.878200  337518 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:22:15.878440  337518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:22:15.878450  337518 out.go:374] Setting ErrFile to fd 2...
	I1010 18:22:15.878454  337518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:22:15.878634  337518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:22:15.878867  337518 out.go:368] Setting JSON to false
	I1010 18:22:15.878910  337518 mustload.go:65] Loading cluster: newest-cni-121129
	I1010 18:22:15.879362  337518 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:22:15.879843  337518 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:15.901240  337518 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:22:15.901576  337518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:22:15.969993  337518 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-10 18:22:15.958590388 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:22:15.970861  337518 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-121129 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1010 18:22:15.972881  337518 out.go:179] * Pausing node newest-cni-121129 ... 
	I1010 18:22:15.974373  337518 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:22:15.974702  337518 ssh_runner.go:195] Run: systemctl --version
	I1010 18:22:15.974750  337518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:15.993297  337518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:16.093226  337518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:22:16.105985  337518 pause.go:52] kubelet running: true
	I1010 18:22:16.106038  337518 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:22:16.242074  337518 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:22:16.242154  337518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:22:16.307430  337518 cri.go:89] found id: "54fc410f4ca645d11135ea775c4765a38d35954eaee3c37750a64a1cf07f2ee7"
	I1010 18:22:16.307454  337518 cri.go:89] found id: "c6fa99fb85d5287d72da51a3db60987a29f6798385a580361fb3660a26987be3"
	I1010 18:22:16.307460  337518 cri.go:89] found id: "7f03778cf9929180d97b99c5a7dabc1b07cab95c28d238404c4b3cdda1350b21"
	I1010 18:22:16.307465  337518 cri.go:89] found id: "bf112ce4d768b53d1e90f30761c3ce870d54e55a9c7241326c2c1e377046fb0b"
	I1010 18:22:16.307469  337518 cri.go:89] found id: "ab61bce748bfcc69bd3fc766155054b877fa6b8c7695ee04c693a7820d3e6b33"
	I1010 18:22:16.307482  337518 cri.go:89] found id: "ef0f16a1ff912c99555175b679ca7c2499386f3f7c4b4c9a7270a180e8c15937"
	I1010 18:22:16.307487  337518 cri.go:89] found id: ""
	I1010 18:22:16.307532  337518 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:22:16.319725  337518 retry.go:31] will retry after 129.59908ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:22:16Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:22:16.450102  337518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:22:16.463695  337518 pause.go:52] kubelet running: false
	I1010 18:22:16.463742  337518 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:22:16.581270  337518 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:22:16.581352  337518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:22:16.648902  337518 cri.go:89] found id: "54fc410f4ca645d11135ea775c4765a38d35954eaee3c37750a64a1cf07f2ee7"
	I1010 18:22:16.648924  337518 cri.go:89] found id: "c6fa99fb85d5287d72da51a3db60987a29f6798385a580361fb3660a26987be3"
	I1010 18:22:16.648928  337518 cri.go:89] found id: "7f03778cf9929180d97b99c5a7dabc1b07cab95c28d238404c4b3cdda1350b21"
	I1010 18:22:16.648931  337518 cri.go:89] found id: "bf112ce4d768b53d1e90f30761c3ce870d54e55a9c7241326c2c1e377046fb0b"
	I1010 18:22:16.648934  337518 cri.go:89] found id: "ab61bce748bfcc69bd3fc766155054b877fa6b8c7695ee04c693a7820d3e6b33"
	I1010 18:22:16.648937  337518 cri.go:89] found id: "ef0f16a1ff912c99555175b679ca7c2499386f3f7c4b4c9a7270a180e8c15937"
	I1010 18:22:16.648940  337518 cri.go:89] found id: ""
	I1010 18:22:16.648997  337518 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:22:16.662660  337518 retry.go:31] will retry after 288.371537ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:22:16Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:22:16.952256  337518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:22:16.966449  337518 pause.go:52] kubelet running: false
	I1010 18:22:16.966520  337518 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:22:17.079945  337518 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:22:17.080029  337518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:22:17.156954  337518 cri.go:89] found id: "54fc410f4ca645d11135ea775c4765a38d35954eaee3c37750a64a1cf07f2ee7"
	I1010 18:22:17.156986  337518 cri.go:89] found id: "c6fa99fb85d5287d72da51a3db60987a29f6798385a580361fb3660a26987be3"
	I1010 18:22:17.156992  337518 cri.go:89] found id: "7f03778cf9929180d97b99c5a7dabc1b07cab95c28d238404c4b3cdda1350b21"
	I1010 18:22:17.156997  337518 cri.go:89] found id: "bf112ce4d768b53d1e90f30761c3ce870d54e55a9c7241326c2c1e377046fb0b"
	I1010 18:22:17.157002  337518 cri.go:89] found id: "ab61bce748bfcc69bd3fc766155054b877fa6b8c7695ee04c693a7820d3e6b33"
	I1010 18:22:17.157007  337518 cri.go:89] found id: "ef0f16a1ff912c99555175b679ca7c2499386f3f7c4b4c9a7270a180e8c15937"
	I1010 18:22:17.157010  337518 cri.go:89] found id: ""
	I1010 18:22:17.157065  337518 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:22:17.169619  337518 retry.go:31] will retry after 385.917949ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:22:17Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:22:17.556246  337518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:22:17.570114  337518 pause.go:52] kubelet running: false
	I1010 18:22:17.570167  337518 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:22:17.690569  337518 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:22:17.690680  337518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:22:17.758297  337518 cri.go:89] found id: "54fc410f4ca645d11135ea775c4765a38d35954eaee3c37750a64a1cf07f2ee7"
	I1010 18:22:17.758323  337518 cri.go:89] found id: "c6fa99fb85d5287d72da51a3db60987a29f6798385a580361fb3660a26987be3"
	I1010 18:22:17.758327  337518 cri.go:89] found id: "7f03778cf9929180d97b99c5a7dabc1b07cab95c28d238404c4b3cdda1350b21"
	I1010 18:22:17.758330  337518 cri.go:89] found id: "bf112ce4d768b53d1e90f30761c3ce870d54e55a9c7241326c2c1e377046fb0b"
	I1010 18:22:17.758333  337518 cri.go:89] found id: "ab61bce748bfcc69bd3fc766155054b877fa6b8c7695ee04c693a7820d3e6b33"
	I1010 18:22:17.758336  337518 cri.go:89] found id: "ef0f16a1ff912c99555175b679ca7c2499386f3f7c4b4c9a7270a180e8c15937"
	I1010 18:22:17.758338  337518 cri.go:89] found id: ""
	I1010 18:22:17.758382  337518 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:22:17.772192  337518 out.go:203] 
	W1010 18:22:17.773271  337518 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:22:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 18:22:17.773290  337518 out.go:285] * 
	W1010 18:22:17.777261  337518 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 18:22:17.778323  337518 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-121129 --alsologtostderr -v=1 failed: exit status 80
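The pause failure is narrower than the test name suggests: every retry above runs "sudo runc list -f json" and exits 1 with "open /run/runc: no such file or directory", so the runc state directory being consulted is simply absent on this cri-o node and the loop can never enumerate containers to pause. The Go sketch below reproduces the check and shows one possible fallback; the crictl fallback is an assumption about a workaround, not minikube's actual code path, and the paths and commands are taken from the log above.

	package main

	// Diagnostic sketch for the GUEST_PAUSE failure (illustrative assumption, not
	// minikube source): runc list reads container state from /run/runc, which does
	// not exist here, so fall back to asking the CRI runtime via crictl instead.
	import (
		"fmt"
		"os"
		"os/exec"
	)

	func listRunning() ([]byte, error) {
		if _, err := os.Stat("/run/runc"); os.IsNotExist(err) {
			// Fallback: crictl talks to cri-o directly and does not need /run/runc.
			return exec.Command("sudo", "crictl", "ps", "-a", "--quiet").Output()
		}
		// Normal path, mirroring the command the log shows pause retrying.
		return exec.Command("sudo", "runc", "list", "-f", "json").Output()
	}

	func main() {
		out, err := listRunning()
		if err != nil {
			fmt.Fprintln(os.Stderr, "list running containers:", err)
			os.Exit(1)
		}
		fmt.Printf("%s", out)
	}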
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-121129
helpers_test.go:243: (dbg) docker inspect newest-cni-121129:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3",
	        "Created": "2025-10-10T18:21:36.293708252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335711,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:22:05.531946344Z",
	            "FinishedAt": "2025-10-10T18:22:04.735261768Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3/hostname",
	        "HostsPath": "/var/lib/docker/containers/44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3/hosts",
	        "LogPath": "/var/lib/docker/containers/44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3/44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3-json.log",
	        "Name": "/newest-cni-121129",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-121129:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-121129",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3",
	                "LowerDir": "/var/lib/docker/overlay2/bf33ff5e2644ca9451feb6b194bd1b9cfcbf6459017f40bc6397e87fbe55a746-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf33ff5e2644ca9451feb6b194bd1b9cfcbf6459017f40bc6397e87fbe55a746/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf33ff5e2644ca9451feb6b194bd1b9cfcbf6459017f40bc6397e87fbe55a746/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf33ff5e2644ca9451feb6b194bd1b9cfcbf6459017f40bc6397e87fbe55a746/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-121129",
	                "Source": "/var/lib/docker/volumes/newest-cni-121129/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-121129",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-121129",
	                "name.minikube.sigs.k8s.io": "newest-cni-121129",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fd56a25fe7e11b70e4d92f61dd44312b28506ce3af2f371d45f8f1d2a04970f5",
	            "SandboxKey": "/var/run/docker/netns/fd56a25fe7e1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-121129": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:e6:da:24:33:14",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cd26f66f7d0715bf666ca6e5dc6891adf394cc9a58fe404ddf68c49d82b6f4c2",
	                    "EndpointID": "0d9d37391289d68345f704bec905cd529100684288cd1282575d1b546a235959",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-121129",
	                        "44f7c2aef6cc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
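
The dump above is the full `docker inspect` JSON; for targeted post-mortem checks the same fields can be pulled with Go templates, which is how the `cli_runner` invocations later in this log do it (for example `docker container inspect -f "{{.State.Status}}"`). A minimal sketch under that assumption, with the container name taken from this report:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// inspect runs `docker container inspect -f <format>` and returns stdout.
	func inspect(name, format string) string {
		out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
		if err != nil {
			return fmt.Sprintf("inspect failed: %v\n", err)
		}
		return string(out)
	}

	func main() {
		name := "newest-cni-121129"
		// State: "running paused=false" here is why the harness still treats
		// the profile as usable even though `minikube pause` exited nonzero.
		fmt.Print(inspect(name, "{{.State.Status}} paused={{.State.Paused}}"))
		// Host port published for the guest's 22/tcp, used for SSH provisioning.
		fmt.Print(inspect(name, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	}
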
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-121129 -n newest-cni-121129
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-121129 -n newest-cni-121129: exit status 2 (318.527474ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
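
As the harness notes, a nonzero exit from `minikube status` is not by itself an error: status encodes cluster state in its exit code, and `Host` can report `Running` while other components are stopped, which is expected for a profile caught mid-pause. A small sketch of reading both the printed state and the exit code (same binary and profile as above; a hypothetical caller, not the harness's actual code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "newest-cni-121129")
		out, err := cmd.Output()
		host := strings.TrimSpace(string(out))

		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Exit status 2 with host == "Running" matches the log above:
			// the node is up but the cluster is degraded ("may be ok").
			fmt.Printf("host=%s exit=%d (may be ok)\n", host, ee.ExitCode())
			return
		}
		fmt.Printf("host=%s err=%v\n", host, err)
	}
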
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-121129 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-556024 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p no-preload-556024 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-821769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-821769 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ image   │ old-k8s-version-141193 image list --format=json                                                                                                                                                                                               │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p old-k8s-version-141193 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-821769 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ no-preload-556024 image list --format=json                                                                                                                                                                                                    │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p no-preload-556024 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ embed-certs-472518 image list --format=json                                                                                                                                                                                                   │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p embed-certs-472518 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ delete  │ -p no-preload-556024                                                                                                                                                                                                                          │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p no-preload-556024                                                                                                                                                                                                                          │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p embed-certs-472518                                                                                                                                                                                                                         │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p embed-certs-472518                                                                                                                                                                                                                         │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-121129 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │                     │
	│ stop    │ -p newest-cni-121129 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-121129 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ start   │ -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ image   │ newest-cni-121129 image list --format=json                                                                                                                                                                                                    │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ pause   │ -p newest-cni-121129 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:22:05
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:22:05.290569  335513 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:22:05.290861  335513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:22:05.290870  335513 out.go:374] Setting ErrFile to fd 2...
	I1010 18:22:05.290877  335513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:22:05.291147  335513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:22:05.291697  335513 out.go:368] Setting JSON to false
	I1010 18:22:05.292906  335513 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3865,"bootTime":1760116660,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:22:05.293008  335513 start.go:141] virtualization: kvm guest
	I1010 18:22:05.294961  335513 out.go:179] * [newest-cni-121129] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:22:05.296259  335513 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:22:05.296288  335513 notify.go:220] Checking for updates...
	I1010 18:22:05.298639  335513 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:22:05.299676  335513 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:22:05.300690  335513 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:22:05.301797  335513 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:22:05.302929  335513 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:22:05.307755  335513 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:22:05.308318  335513 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:22:05.332954  335513 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:22:05.333071  335513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:22:05.393251  335513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-10 18:22:05.383186457 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:22:05.393422  335513 docker.go:318] overlay module found
	I1010 18:22:05.395166  335513 out.go:179] * Using the docker driver based on existing profile
	I1010 18:22:05.396306  335513 start.go:305] selected driver: docker
	I1010 18:22:05.396321  335513 start.go:925] validating driver "docker" against &{Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:22:05.396438  335513 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:22:05.397122  335513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:22:05.458840  335513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-10 18:22:05.448230468 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:22:05.459176  335513 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1010 18:22:05.459216  335513 cni.go:84] Creating CNI manager for ""
	I1010 18:22:05.459260  335513 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:22:05.459302  335513 start.go:349] cluster config:
	{Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:22:05.461891  335513 out.go:179] * Starting "newest-cni-121129" primary control-plane node in "newest-cni-121129" cluster
	I1010 18:22:05.462953  335513 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:22:05.464080  335513 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:22:05.465182  335513 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:22:05.465219  335513 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:22:05.465239  335513 cache.go:58] Caching tarball of preloaded images
	I1010 18:22:05.465271  335513 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:22:05.465353  335513 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:22:05.465368  335513 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1010 18:22:05.465464  335513 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/config.json ...
	I1010 18:22:05.486563  335513 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:22:05.486586  335513 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:22:05.486605  335513 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:22:05.486632  335513 start.go:360] acquireMachinesLock for newest-cni-121129: {Name:mkd067d67013b78a79cc31e2d50fcfd69790fc6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:22:05.486702  335513 start.go:364] duration metric: took 48.282µs to acquireMachinesLock for "newest-cni-121129"
	I1010 18:22:05.486725  335513 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:22:05.486733  335513 fix.go:54] fixHost starting: 
	I1010 18:22:05.486937  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:05.505160  335513 fix.go:112] recreateIfNeeded on newest-cni-121129: state=Stopped err=<nil>
	W1010 18:22:05.505189  335513 fix.go:138] unexpected machine state, will restart: <nil>
	W1010 18:22:02.659629  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:04.660067  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	I1010 18:22:05.506971  335513 out.go:252] * Restarting existing docker container for "newest-cni-121129" ...
	I1010 18:22:05.507082  335513 cli_runner.go:164] Run: docker start newest-cni-121129
	I1010 18:22:05.744340  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:05.763128  335513 kic.go:430] container "newest-cni-121129" state is running.
	I1010 18:22:05.763484  335513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:22:05.782418  335513 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/config.json ...
	I1010 18:22:05.782704  335513 machine.go:93] provisionDockerMachine start ...
	I1010 18:22:05.782787  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:05.801168  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:05.801379  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:05.801392  335513 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:22:05.802022  335513 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53092->127.0.0.1:33133: read: connection reset by peer
	I1010 18:22:08.938319  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:22:08.938347  335513 ubuntu.go:182] provisioning hostname "newest-cni-121129"
	I1010 18:22:08.938432  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:08.956809  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:08.957009  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:08.957024  335513 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-121129 && echo "newest-cni-121129" | sudo tee /etc/hostname
	I1010 18:22:09.102637  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:22:09.102708  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:09.121495  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:09.121708  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:09.121725  335513 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-121129' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-121129/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-121129' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:22:09.255802  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:22:09.255838  335513 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:22:09.255877  335513 ubuntu.go:190] setting up certificates
	I1010 18:22:09.255893  335513 provision.go:84] configureAuth start
	I1010 18:22:09.255959  335513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:22:09.273223  335513 provision.go:143] copyHostCerts
	I1010 18:22:09.273280  335513 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:22:09.273293  335513 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:22:09.273359  335513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:22:09.273459  335513 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:22:09.273468  335513 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:22:09.273494  335513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:22:09.273561  335513 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:22:09.273568  335513 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:22:09.273591  335513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:22:09.273652  335513 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.newest-cni-121129 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-121129]
	I1010 18:22:09.612120  335513 provision.go:177] copyRemoteCerts
	I1010 18:22:09.612187  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:22:09.612221  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:09.629812  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:09.726962  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:22:09.746555  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 18:22:09.766845  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 18:22:09.786986  335513 provision.go:87] duration metric: took 531.066176ms to configureAuth
	I1010 18:22:09.787015  335513 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:22:09.787209  335513 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:22:09.787337  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:09.805200  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:09.805389  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:09.805406  335513 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:22:10.098222  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:22:10.098249  335513 machine.go:96] duration metric: took 4.31552528s to provisionDockerMachine
	I1010 18:22:10.098261  335513 start.go:293] postStartSetup for "newest-cni-121129" (driver="docker")
	I1010 18:22:10.098276  335513 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:22:10.098357  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:22:10.098407  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.115908  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.213790  335513 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:22:10.217524  335513 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:22:10.217553  335513 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:22:10.217567  335513 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:22:10.217636  335513 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:22:10.217740  335513 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:22:10.217864  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:22:10.226684  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:22:10.248076  335513 start.go:296] duration metric: took 149.799111ms for postStartSetup
	I1010 18:22:10.248178  335513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:22:10.248226  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.266300  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.360213  335513 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:22:10.364797  335513 fix.go:56] duration metric: took 4.878059137s for fixHost
	I1010 18:22:10.364821  335513 start.go:83] releasing machines lock for "newest-cni-121129", held for 4.878105914s
	I1010 18:22:10.364878  335513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:22:10.383110  335513 ssh_runner.go:195] Run: cat /version.json
	I1010 18:22:10.383169  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.383208  335513 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:22:10.383290  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.401694  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.402069  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.549004  335513 ssh_runner.go:195] Run: systemctl --version
	I1010 18:22:10.555440  335513 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:22:10.589903  335513 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:22:10.594487  335513 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:22:10.594552  335513 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:22:10.603402  335513 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1010 18:22:10.603427  335513 start.go:495] detecting cgroup driver to use...
	I1010 18:22:10.603462  335513 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:22:10.603516  335513 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:22:10.617988  335513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:22:10.630757  335513 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:22:10.630811  335513 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:22:10.645086  335513 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:22:10.659116  335513 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:22:10.739783  335513 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:22:10.821838  335513 docker.go:234] disabling docker service ...
	I1010 18:22:10.821898  335513 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:22:10.836530  335513 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:22:10.849438  335513 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:22:10.926810  335513 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:22:11.009431  335513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:22:11.022257  335513 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:22:11.037720  335513 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:22:11.037792  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.047811  335513 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:22:11.047875  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.057692  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.067884  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.077525  335513 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:22:11.087002  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.096828  335513 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.106020  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.115595  335513 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:22:11.123688  335513 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:22:11.131885  335513 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:22:11.209940  335513 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1010 18:22:11.350816  335513 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:22:11.350877  335513 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:22:11.355096  335513 start.go:563] Will wait 60s for crictl version
	I1010 18:22:11.355145  335513 ssh_runner.go:195] Run: which crictl
	I1010 18:22:11.358770  335513 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:22:11.384561  335513 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:22:11.384639  335513 ssh_runner.go:195] Run: crio --version
	I1010 18:22:11.411320  335513 ssh_runner.go:195] Run: crio --version
	I1010 18:22:11.440045  335513 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1010 18:22:06.661425  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:09.158000  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:11.160422  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	I1010 18:22:11.441103  335513 cli_runner.go:164] Run: docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:22:11.458538  335513 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1010 18:22:11.462704  335513 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:22:11.475134  335513 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1010 18:22:11.476017  335513 kubeadm.go:883] updating cluster {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:22:11.476150  335513 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:22:11.476202  335513 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:22:11.507304  335513 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:22:11.507323  335513 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:22:11.507363  335513 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:22:11.533243  335513 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:22:11.533265  335513 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:22:11.533272  335513 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1010 18:22:11.533353  335513 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-121129 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
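Two details of the kubelet unit rendered above are worth noting: the empty ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet.service before the drop-in defines its own, and the rendered text is what gets copied a few lines below as the 367-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.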
	I1010 18:22:11.533416  335513 ssh_runner.go:195] Run: crio config
	I1010 18:22:11.578761  335513 cni.go:84] Creating CNI manager for ""
	I1010 18:22:11.578789  335513 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:22:11.578804  335513 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1010 18:22:11.578824  335513 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-121129 NodeName:newest-cni-121129 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:22:11.578929  335513 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-121129"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1010 18:22:11.578984  335513 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:22:11.587839  335513 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:22:11.587894  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:22:11.596414  335513 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1010 18:22:11.610238  335513 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:22:11.623960  335513 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
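The unit files and the four-document kubeadm config are rendered in memory and staged over SSH (the "scp memory" entries above) rather than written from local files. On this restart path the staged kubeadm.yaml.new is never applied directly; it is only compared against the live copy, as the diff run a few lines below shows:

	# staged vs. live kubeadm config; an empty diff means the running
	# control plane needs no reconfiguration on restart
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new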
	I1010 18:22:11.637763  335513 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:22:11.641378  335513 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:22:11.652228  335513 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:22:11.733285  335513 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:22:11.757177  335513 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129 for IP: 192.168.85.2
	I1010 18:22:11.757199  335513 certs.go:195] generating shared ca certs ...
	I1010 18:22:11.757219  335513 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:22:11.757370  335513 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:22:11.757429  335513 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:22:11.757441  335513 certs.go:257] generating profile certs ...
	I1010 18:22:11.757572  335513 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key
	I1010 18:22:11.757653  335513 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7
	I1010 18:22:11.757703  335513 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key
	I1010 18:22:11.757835  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:22:11.757872  335513 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:22:11.757885  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:22:11.757915  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:22:11.757954  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:22:11.757981  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:22:11.758033  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:22:11.758775  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:22:11.778857  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:22:11.801208  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:22:11.824760  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:22:11.851378  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:22:11.870850  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:22:11.889951  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:22:11.908879  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:22:11.928343  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:22:11.948375  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:22:11.969276  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:22:11.988998  335513 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:22:12.003247  335513 ssh_runner.go:195] Run: openssl version
	I1010 18:22:12.009554  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:22:12.018800  335513 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:22:12.022724  335513 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:22:12.022777  335513 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:22:12.057604  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:22:12.067287  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:22:12.076762  335513 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:22:12.080550  335513 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:22:12.080594  335513 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:22:12.114518  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:22:12.123583  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:22:12.132960  335513 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:22:12.137033  335513 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:22:12.137103  335513 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:22:12.172587  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
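The ls/openssl/ln sequences above implement OpenSSL's subject-hash lookup convention: a CA becomes trusted by symlinking <subject-hash>.0 in /etc/ssl/certs to the PEM file. The hashes seen here (b5213941, 51391683, 3ec20f2e) are derived from each certificate's subject, e.g. for the minikube CA:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# -> b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0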
	I1010 18:22:12.181976  335513 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:22:12.185849  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:22:12.220072  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:22:12.255822  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:22:12.300141  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:22:12.343441  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:22:12.393734  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
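The six openssl runs above screen the control-plane certificates for imminent expiry without parsing any dates: -checkend 86400 exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now.

	# the exit status is the whole interface: 0 = still valid in 24h
	sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "not expiring within 24h"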
	I1010 18:22:12.454003  335513 kubeadm.go:400] StartCluster: {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:22:12.454110  335513 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:22:12.454196  335513 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:22:12.487305  335513 cri.go:89] found id: "7f03778cf9929180d97b99c5a7dabc1b07cab95c28d238404c4b3cdda1350b21"
	I1010 18:22:12.487332  335513 cri.go:89] found id: "bf112ce4d768b53d1e90f30761c3ce870d54e55a9c7241326c2c1e377046fb0b"
	I1010 18:22:12.487338  335513 cri.go:89] found id: "ab61bce748bfcc69bd3fc766155054b877fa6b8c7695ee04c693a7820d3e6b33"
	I1010 18:22:12.487343  335513 cri.go:89] found id: "ef0f16a1ff912c99555175b679ca7c2499386f3f7c4b4c9a7270a180e8c15937"
	I1010 18:22:12.487347  335513 cri.go:89] found id: ""
	I1010 18:22:12.487394  335513 ssh_runner.go:195] Run: sudo runc list -f json
	W1010 18:22:12.500489  335513 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:22:12Z" level=error msg="open /run/runc: no such file or directory"
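This runc failure is tolerated: runc list reads its state directory (/run/runc by default when run as root), which does not exist on this node, so minikube logs the "unpause failed" warning and carries on with the container IDs it already collected through crictl in the lines above.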
	I1010 18:22:12.500556  335513 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:22:12.509425  335513 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1010 18:22:12.509447  335513 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1010 18:22:12.509493  335513 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 18:22:12.518026  335513 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:22:12.518736  335513 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-121129" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:22:12.519045  335513 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-5815/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-121129" cluster setting kubeconfig missing "newest-cni-121129" context setting]
	I1010 18:22:12.519593  335513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
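The repair writes a fresh cluster/context pair for newest-cni-121129 into the shared kubeconfig under the WriteFile lock. A sketch of the entries added, with the name and server taken from this log and certificate fields elided (the layout follows the standard kubeconfig schema; this is not a capture of the actual file):

	clusters:
	- name: newest-cni-121129
	  cluster:
	    server: https://192.168.85.2:8443
	contexts:
	- name: newest-cni-121129
	  context:
	    cluster: newest-cni-121129
	    user: newest-cni-121129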
	I1010 18:22:12.520854  335513 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 18:22:12.530013  335513 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1010 18:22:12.530074  335513 kubeadm.go:601] duration metric: took 20.594831ms to restartPrimaryControlPlane
	I1010 18:22:12.530092  335513 kubeadm.go:402] duration metric: took 76.095724ms to StartCluster
	I1010 18:22:12.530115  335513 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:22:12.530186  335513 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:22:12.530994  335513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:22:12.531256  335513 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:22:12.531320  335513 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:22:12.531440  335513 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-121129"
	I1010 18:22:12.531451  335513 addons.go:69] Setting dashboard=true in profile "newest-cni-121129"
	I1010 18:22:12.531464  335513 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-121129"
	I1010 18:22:12.531470  335513 addons.go:238] Setting addon dashboard=true in "newest-cni-121129"
	W1010 18:22:12.531473  335513 addons.go:247] addon storage-provisioner should already be in state true
	W1010 18:22:12.531478  335513 addons.go:247] addon dashboard should already be in state true
	I1010 18:22:12.531482  335513 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:22:12.531493  335513 addons.go:69] Setting default-storageclass=true in profile "newest-cni-121129"
	I1010 18:22:12.531516  335513 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:22:12.531531  335513 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-121129"
	I1010 18:22:12.531504  335513 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:22:12.531869  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.532071  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.532071  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.535048  335513 out.go:179] * Verifying Kubernetes components...
	I1010 18:22:12.536132  335513 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:22:12.558523  335513 addons.go:238] Setting addon default-storageclass=true in "newest-cni-121129"
	W1010 18:22:12.558549  335513 addons.go:247] addon default-storageclass should already be in state true
	I1010 18:22:12.558578  335513 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:22:12.558631  335513 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1010 18:22:12.559047  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.559747  335513 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:22:12.560780  335513 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1010 18:22:12.560840  335513 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:22:12.560860  335513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:22:12.560910  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:12.565594  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1010 18:22:12.565614  335513 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1010 18:22:12.565676  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:12.591733  335513 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:22:12.591757  335513 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:22:12.591901  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:12.596614  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:12.597384  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:12.615727  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:12.677916  335513 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:22:12.692091  335513 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:22:12.692167  335513 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:22:12.705996  335513 api_server.go:72] duration metric: took 174.708821ms to wait for apiserver process to appear ...
	I1010 18:22:12.706031  335513 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:22:12.706074  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:12.762093  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1010 18:22:12.762118  335513 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1010 18:22:12.763137  335513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:22:12.775071  335513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:22:12.780905  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1010 18:22:12.780927  335513 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1010 18:22:12.802455  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1010 18:22:12.802487  335513 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1010 18:22:12.823607  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1010 18:22:12.823636  335513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1010 18:22:12.839919  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1010 18:22:12.839944  335513 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1010 18:22:12.856483  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1010 18:22:12.856511  335513 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1010 18:22:12.873148  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1010 18:22:12.873175  335513 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1010 18:22:12.888146  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1010 18:22:12.888174  335513 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1010 18:22:12.903848  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:22:12.903872  335513 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1010 18:22:12.922065  335513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:22:13.877182  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 18:22:13.877224  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 18:22:13.877242  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:13.912048  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 18:22:13.912094  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 18:22:14.206894  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:14.212024  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:22:14.212069  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 18:22:14.404024  335513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.640853795s)
	I1010 18:22:14.404112  335513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.629010505s)
	I1010 18:22:14.404217  335513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.482117938s)
	I1010 18:22:14.406078  335513 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-121129 addons enable metrics-server
	
	I1010 18:22:14.415455  335513 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1010 18:22:14.416687  335513 addons.go:514] duration metric: took 1.885368042s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1010 18:22:14.706642  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:14.710574  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:22:14.710598  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
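The healthz progression above is the normal startup sequence for a restarted apiserver: the early probes return 403 because the anonymous request cannot be authorized until the RBAC machinery is serving (unauthenticated access to /healthz, /livez and /readyz comes from the system:public-info-viewer role), the 500s then name the last unfinished poststarthooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes), and the next probe returns 200. The same probe by hand, with -k since the cluster CA is not in the local trust store:

	curl -k https://192.168.85.2:8443/healthz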
	I1010 18:22:15.206122  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:15.211233  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1010 18:22:15.212217  335513 api_server.go:141] control plane version: v1.34.1
	I1010 18:22:15.212245  335513 api_server.go:131] duration metric: took 2.506207886s to wait for apiserver health ...
	I1010 18:22:15.212254  335513 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:22:15.216033  335513 system_pods.go:59] 8 kube-system pods found
	I1010 18:22:15.216081  335513 system_pods.go:61] "coredns-66bc5c9577-bbxwj" [54b0d9c6-555f-476b-90d2-aca531478020] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1010 18:22:15.216094  335513 system_pods.go:61] "etcd-newest-cni-121129" [24b69503-efe0-4418-b656-58b90f7d7420] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:22:15.216110  335513 system_pods.go:61] "kindnet-9ml5n" [22d3f2b7-d65b-4c8e-a02f-58ead02d9794] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1010 18:22:15.216124  335513 system_pods.go:61] "kube-apiserver-newest-cni-121129" [c429c3d5-c663-453e-9d48-8eacc534ebf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:22:15.216133  335513 system_pods.go:61] "kube-controller-manager-newest-cni-121129" [5e35e588-6a2a-414e-aea9-4d1d8b7897dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:22:15.216142  335513 system_pods.go:61] "kube-proxy-sw4cj" [82e9ec15-44c0-4bfd-8b16-3862f7bb01a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1010 18:22:15.216147  335513 system_pods.go:61] "kube-scheduler-newest-cni-121129" [55bc4998-af60-4c82-a3cc-18ccc57ede90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:22:15.216160  335513 system_pods.go:61] "storage-provisioner" [c4cb75b4-5b40-4243-b3df-fd256cb036f9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
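Both Pending pods are blocked by the node.kubernetes.io/not-ready:NoSchedule taint shown in the node description further down; it clears once kindnet writes a CNI config and the kubelet reports Ready. That is consistent with this profile's VerifyComponents settings (node_ready:false, system_pods:true), so the start does not wait for them to schedule. Checked by hand:

	kubectl --context newest-cni-121129 describe node newest-cni-121129 | grep Taints
	# Taints: node.kubernetes.io/not-ready:NoSchedule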
	I1010 18:22:15.216168  335513 system_pods.go:74] duration metric: took 3.909666ms to wait for pod list to return data ...
	I1010 18:22:15.216178  335513 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:22:15.218489  335513 default_sa.go:45] found service account: "default"
	I1010 18:22:15.218507  335513 default_sa.go:55] duration metric: took 2.324261ms for default service account to be created ...
	I1010 18:22:15.218517  335513 kubeadm.go:586] duration metric: took 2.68723566s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1010 18:22:15.218530  335513 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:22:15.220763  335513 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:22:15.220790  335513 node_conditions.go:123] node cpu capacity is 8
	I1010 18:22:15.220807  335513 node_conditions.go:105] duration metric: took 2.269966ms to run NodePressure ...
	I1010 18:22:15.220826  335513 start.go:241] waiting for startup goroutines ...
	I1010 18:22:15.220838  335513 start.go:246] waiting for cluster config update ...
	I1010 18:22:15.220851  335513 start.go:255] writing updated cluster config ...
	I1010 18:22:15.221177  335513 ssh_runner.go:195] Run: rm -f paused
	I1010 18:22:15.271095  335513 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:22:15.273221  335513 out.go:179] * Done! kubectl is now configured to use "newest-cni-121129" cluster and "default" namespace by default
	W1010 18:22:13.657932  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:15.658631  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.133089935Z" level=info msg="Running pod sandbox: kube-system/kindnet-9ml5n/POD" id=f84ff61b-3b3a-46b4-820c-de94daf6e2b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.133179712Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.133753351Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.134384937Z" level=info msg="Ran pod sandbox ba0f1feaaaa83e75962ce8aed522e5db0e495f157bb3189dd739b5c2d05f42dc with infra container: kube-system/kube-proxy-sw4cj/POD" id=0b79577e-a196-4ab8-a5eb-38357f3c90a0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.135400396Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=81023e8b-c2ce-46b2-b792-b275e5cc4d22 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.136113128Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f84ff61b-3b3a-46b4-820c-de94daf6e2b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.136150063Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fcd43586-197c-4397-83e6-ed710b88a411 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.137177773Z" level=info msg="Creating container: kube-system/kube-proxy-sw4cj/kube-proxy" id=857419a0-419a-41b9-897a-00ab96141de8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.137532887Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.137640463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.138222615Z" level=info msg="Ran pod sandbox 9affa195e6a5e78bf758247bee73ba1280e90596f95d7a99c0e4175e8bec4f99 with infra container: kube-system/kindnet-9ml5n/POD" id=f84ff61b-3b3a-46b4-820c-de94daf6e2b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.139161663Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b50cb72a-cf44-4875-b53c-a9e638de52b9 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.140411234Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f9384bf0-49c9-430b-b742-325c4ca11984 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.141434142Z" level=info msg="Creating container: kube-system/kindnet-9ml5n/kindnet-cni" id=04bcfb28-5a12-4e60-8ae9-8fda864bd6a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.141830154Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.141944095Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.142261715Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.145024803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.145605788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.171447941Z" level=info msg="Created container 54fc410f4ca645d11135ea775c4765a38d35954eaee3c37750a64a1cf07f2ee7: kube-system/kindnet-9ml5n/kindnet-cni" id=04bcfb28-5a12-4e60-8ae9-8fda864bd6a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.172103841Z" level=info msg="Starting container: 54fc410f4ca645d11135ea775c4765a38d35954eaee3c37750a64a1cf07f2ee7" id=bfd61f8f-1d36-4cea-9ff0-00577e8607eb name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.172881118Z" level=info msg="Created container c6fa99fb85d5287d72da51a3db60987a29f6798385a580361fb3660a26987be3: kube-system/kube-proxy-sw4cj/kube-proxy" id=857419a0-419a-41b9-897a-00ab96141de8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.173464522Z" level=info msg="Starting container: c6fa99fb85d5287d72da51a3db60987a29f6798385a580361fb3660a26987be3" id=ecacf92e-ebbc-4f1b-af26-3f820bb5d6d0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.174189355Z" level=info msg="Started container" PID=1054 containerID=54fc410f4ca645d11135ea775c4765a38d35954eaee3c37750a64a1cf07f2ee7 description=kube-system/kindnet-9ml5n/kindnet-cni id=bfd61f8f-1d36-4cea-9ff0-00577e8607eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=9affa195e6a5e78bf758247bee73ba1280e90596f95d7a99c0e4175e8bec4f99
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.176671811Z" level=info msg="Started container" PID=1055 containerID=c6fa99fb85d5287d72da51a3db60987a29f6798385a580361fb3660a26987be3 description=kube-system/kube-proxy-sw4cj/kube-proxy id=ecacf92e-ebbc-4f1b-af26-3f820bb5d6d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ba0f1feaaaa83e75962ce8aed522e5db0e495f157bb3189dd739b5c2d05f42dc
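Each pod in this section follows the CRI lifecycle: RunPodSandbox creates the infra (pause) container, ImageStatus checks the image, CreateContainer builds the workload container inside the sandbox, and StartContainer launches it. The same objects can be inspected on the node with crictl, using the kube-proxy sandbox ID logged above (crictl accepts ID prefixes):

	sudo crictl inspectp ba0f1feaaaa83   # the kube-proxy pod sandbox
	sudo crictl ps --pod ba0f1feaaaa83   # containers inside that sandbox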
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	54fc410f4ca64       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   3 seconds ago       Running             kindnet-cni               1                   9affa195e6a5e       kindnet-9ml5n                               kube-system
	c6fa99fb85d52       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   3 seconds ago       Running             kube-proxy                1                   ba0f1feaaaa83       kube-proxy-sw4cj                            kube-system
	7f03778cf9929       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 seconds ago       Running             etcd                      1                   6a0df69c3e1a3       etcd-newest-cni-121129                      kube-system
	bf112ce4d768b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   6 seconds ago       Running             kube-controller-manager   1                   72ec89047afac       kube-controller-manager-newest-cni-121129   kube-system
	ab61bce748bfc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   6 seconds ago       Running             kube-scheduler            1                   16da27916f279       kube-scheduler-newest-cni-121129            kube-system
	ef0f16a1ff912       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   6 seconds ago       Running             kube-apiserver            1                   528a5198ca4aa       kube-apiserver-newest-cni-121129            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-121129
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-121129
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=newest-cni-121129
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_21_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:21:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-121129
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:22:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:22:13 +0000   Fri, 10 Oct 2025 18:21:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:22:13 +0000   Fri, 10 Oct 2025 18:21:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:22:13 +0000   Fri, 10 Oct 2025 18:21:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 10 Oct 2025 18:22:13 +0000   Fri, 10 Oct 2025 18:21:49 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-121129
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                88739fa5-1b3b-4f5e-adda-c7c74720b2ef
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-121129                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         24s
	  kube-system                 kindnet-9ml5n                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19s
	  kube-system                 kube-apiserver-newest-cni-121129             250m (3%)     0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-controller-manager-newest-cni-121129    200m (2%)     0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-proxy-sw4cj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-scheduler-newest-cni-121129             100m (1%)     0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18s   kube-proxy       
	  Normal  Starting                 3s    kube-proxy       
	  Normal  Starting                 25s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s   kubelet          Node newest-cni-121129 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s   kubelet          Node newest-cni-121129 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s   kubelet          Node newest-cni-121129 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           20s   node-controller  Node newest-cni-121129 event: Registered Node newest-cni-121129 in Controller
	  Normal  RegisteredNode           1s    node-controller  Node newest-cni-121129 event: Registered Node newest-cni-121129 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 95 0c 3e 92 2e 08 06
	[  +0.052845] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[ +11.354316] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[  +7.101927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 9b 73 27 8c 80 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[  +6.287191] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 27 2d 28 d6 46 08 06
	[  +0.000293] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[Oct10 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 8c 22 f6 6b cf 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 29 bf 13 20 f9 08 06
	[ +15.511156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e d6 74 aa 27 d0 08 06
	[  +0.008495] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	[Oct10 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 0b 54 33 52 4e 08 06
	[  +0.000597] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	
	
	==> etcd [7f03778cf9929180d97b99c5a7dabc1b07cab95c28d238404c4b3cdda1350b21] <==
	{"level":"warn","ts":"2025-10-10T18:22:13.254893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.265190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.272013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.280042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.286382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.293960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.300488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.306801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.313246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.319626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.327162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.334531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.341941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.349137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.356484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.364261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.375141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.385270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.392767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.401047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.408816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.423422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.431665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.438478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.481623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50586","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:22:18 up  1:04,  0 user,  load average: 4.65, 4.58, 3.00
	Linux newest-cni-121129 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [54fc410f4ca645d11135ea775c4765a38d35954eaee3c37750a64a1cf07f2ee7] <==
	I1010 18:22:15.338713       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:22:15.338952       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1010 18:22:15.339145       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:22:15.339162       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:22:15.339182       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:22:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:22:15.543117       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:22:15.543185       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:22:15.543198       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:22:15.543443       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 18:22:15.943914       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:22:15.943945       1 metrics.go:72] Registering metrics
	I1010 18:22:15.944036       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [ef0f16a1ff912c99555175b679ca7c2499386f3f7c4b4c9a7270a180e8c15937] <==
	I1010 18:22:13.968307       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1010 18:22:13.968336       1 aggregator.go:171] initial CRD sync complete...
	I1010 18:22:13.968346       1 autoregister_controller.go:144] Starting autoregister controller
	I1010 18:22:13.968352       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1010 18:22:13.968357       1 cache.go:39] Caches are synced for autoregister controller
	I1010 18:22:13.967936       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1010 18:22:13.968505       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1010 18:22:13.968945       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1010 18:22:13.969038       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1010 18:22:13.973607       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1010 18:22:13.980381       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:22:13.997759       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:22:14.020035       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1010 18:22:14.207397       1 controller.go:667] quota admission added evaluator for: namespaces
	I1010 18:22:14.237762       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1010 18:22:14.257977       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:22:14.266612       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:22:14.273400       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1010 18:22:14.305483       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.206.163"}
	I1010 18:22:14.315876       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.250.116"}
	I1010 18:22:14.871371       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:22:17.545200       1 controller.go:667] quota admission added evaluator for: endpoints
	I1010 18:22:17.645389       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1010 18:22:17.694307       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [bf112ce4d768b53d1e90f30761c3ce870d54e55a9c7241326c2c1e377046fb0b] <==
	I1010 18:22:17.252092       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1010 18:22:17.254918       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1010 18:22:17.257157       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1010 18:22:17.259437       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1010 18:22:17.260613       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1010 18:22:17.271799       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1010 18:22:17.271904       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1010 18:22:17.271959       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1010 18:22:17.271977       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1010 18:22:17.271984       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1010 18:22:17.272974       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1010 18:22:17.275274       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1010 18:22:17.291797       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1010 18:22:17.291823       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1010 18:22:17.291916       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1010 18:22:17.292985       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1010 18:22:17.293013       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1010 18:22:17.293030       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1010 18:22:17.293119       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1010 18:22:17.293150       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1010 18:22:17.293177       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1010 18:22:17.297735       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1010 18:22:17.298936       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:22:17.309110       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:22:17.314279       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c6fa99fb85d5287d72da51a3db60987a29f6798385a580361fb3660a26987be3] <==
	I1010 18:22:15.210674       1 server_linux.go:53] "Using iptables proxy"
	I1010 18:22:15.279204       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 18:22:15.380169       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 18:22:15.380219       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1010 18:22:15.380314       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:22:15.400762       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:22:15.400823       1 server_linux.go:132] "Using iptables Proxier"
	I1010 18:22:15.405998       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:22:15.406404       1 server.go:527] "Version info" version="v1.34.1"
	I1010 18:22:15.406446       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:22:15.407795       1 config.go:106] "Starting endpoint slice config controller"
	I1010 18:22:15.407817       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 18:22:15.407886       1 config.go:200] "Starting service config controller"
	I1010 18:22:15.407907       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 18:22:15.407923       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 18:22:15.407928       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 18:22:15.408025       1 config.go:309] "Starting node config controller"
	I1010 18:22:15.408033       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 18:22:15.408039       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 18:22:15.508662       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1010 18:22:15.508802       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 18:22:15.508797       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ab61bce748bfcc69bd3fc766155054b877fa6b8c7695ee04c693a7820d3e6b33] <==
	I1010 18:22:13.128224       1 serving.go:386] Generated self-signed cert in-memory
	W1010 18:22:13.886343       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1010 18:22:13.886378       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1010 18:22:13.886390       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1010 18:22:13.886399       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1010 18:22:13.923149       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1010 18:22:13.923179       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:22:13.934839       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:22:13.934934       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:22:13.935019       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1010 18:22:13.935210       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1010 18:22:14.035837       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: E1010 18:22:13.864312     679 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-121129\" not found" node="newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.928752     679 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: E1010 18:22:13.942346     679 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-121129\" already exists" pod="kube-system/etcd-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.942390     679 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: E1010 18:22:13.948522     679 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-121129\" already exists" pod="kube-system/kube-apiserver-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.948552     679 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: E1010 18:22:13.954601     679 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-121129\" already exists" pod="kube-system/kube-controller-manager-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.954638     679 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: E1010 18:22:13.959151     679 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-121129\" already exists" pod="kube-system/kube-scheduler-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.992022     679 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.992144     679 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.992185     679 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.993076     679 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.823611     679 apiserver.go:52] "Watching apiserver"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.828957     679 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.865444     679 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-121129"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: E1010 18:22:14.871774     679 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-121129\" already exists" pod="kube-system/kube-scheduler-newest-cni-121129"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.901249     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22d3f2b7-d65b-4c8e-a02f-58ead02d9794-xtables-lock\") pod \"kindnet-9ml5n\" (UID: \"22d3f2b7-d65b-4c8e-a02f-58ead02d9794\") " pod="kube-system/kindnet-9ml5n"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.901304     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22d3f2b7-d65b-4c8e-a02f-58ead02d9794-lib-modules\") pod \"kindnet-9ml5n\" (UID: \"22d3f2b7-d65b-4c8e-a02f-58ead02d9794\") " pod="kube-system/kindnet-9ml5n"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.901330     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82e9ec15-44c0-4bfd-8b16-3862f7bb01a6-xtables-lock\") pod \"kube-proxy-sw4cj\" (UID: \"82e9ec15-44c0-4bfd-8b16-3862f7bb01a6\") " pod="kube-system/kube-proxy-sw4cj"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.901355     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/22d3f2b7-d65b-4c8e-a02f-58ead02d9794-cni-cfg\") pod \"kindnet-9ml5n\" (UID: \"22d3f2b7-d65b-4c8e-a02f-58ead02d9794\") " pod="kube-system/kindnet-9ml5n"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.901432     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82e9ec15-44c0-4bfd-8b16-3862f7bb01a6-lib-modules\") pod \"kube-proxy-sw4cj\" (UID: \"82e9ec15-44c0-4bfd-8b16-3862f7bb01a6\") " pod="kube-system/kube-proxy-sw4cj"
	Oct 10 18:22:16 newest-cni-121129 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 10 18:22:16 newest-cni-121129 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 10 18:22:16 newest-cni-121129 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
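A note on the kubelet log above: the final three systemd lines show kubelet.service being stopped, which lines up with the in-flight `pause -p newest-cni-121129` command recorded in the Audit table later in this report. A minimal sketch for checking that state by hand (assuming the same out/minikube-linux-amd64 binary and profile name from this run):

	# pause the cluster, then confirm kubelet was stopped inside the node container
	out/minikube-linux-amd64 pause -p newest-cni-121129 --alsologtostderr -v=1
	out/minikube-linux-amd64 -p newest-cni-121129 ssh "sudo systemctl status kubelet"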
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-121129 -n newest-cni-121129
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-121129 -n newest-cni-121129: exit status 2 (307.266217ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-121129 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-bbxwj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fgsgr kubernetes-dashboard-855c9754f9-95jxx
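For reference, the query above is how the harness finds pods that are not in the Running phase; an annotated equivalent of the same kubectl invocation:

	# -A = all namespaces; the field selector keeps only pods whose status.phase is not Running;
	# the jsonpath template prints bare pod names, space-separated
	kubectl --context newest-cni-121129 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'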
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-121129 describe pod coredns-66bc5c9577-bbxwj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fgsgr kubernetes-dashboard-855c9754f9-95jxx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-121129 describe pod coredns-66bc5c9577-bbxwj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fgsgr kubernetes-dashboard-855c9754f9-95jxx: exit status 1 (61.782594ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-bbxwj" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-fgsgr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-95jxx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-121129 describe pod coredns-66bc5c9577-bbxwj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fgsgr kubernetes-dashboard-855c9754f9-95jxx: exit status 1
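The NotFound errors above are a quirk of how the describe is invoked: the four pods live in the kube-system and kubernetes-dashboard namespaces, but `kubectl describe pod` without -n only searches the default namespace. A sketch of the namespace-qualified form that would locate them (assuming the pods still exist at this point in the run):

	kubectl --context newest-cni-121129 -n kube-system describe pod coredns-66bc5c9577-bbxwj storage-provisioner
	kubectl --context newest-cni-121129 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-fgsgr kubernetes-dashboard-855c9754f9-95jxx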
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-121129
helpers_test.go:243: (dbg) docker inspect newest-cni-121129:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3",
	        "Created": "2025-10-10T18:21:36.293708252Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 335711,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:22:05.531946344Z",
	            "FinishedAt": "2025-10-10T18:22:04.735261768Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3/hostname",
	        "HostsPath": "/var/lib/docker/containers/44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3/hosts",
	        "LogPath": "/var/lib/docker/containers/44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3/44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3-json.log",
	        "Name": "/newest-cni-121129",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-121129:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-121129",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "44f7c2aef6cca1948267e8f9a581073b0895bb8f16e8949e0db91550afe6a3a3",
	                "LowerDir": "/var/lib/docker/overlay2/bf33ff5e2644ca9451feb6b194bd1b9cfcbf6459017f40bc6397e87fbe55a746-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bf33ff5e2644ca9451feb6b194bd1b9cfcbf6459017f40bc6397e87fbe55a746/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bf33ff5e2644ca9451feb6b194bd1b9cfcbf6459017f40bc6397e87fbe55a746/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bf33ff5e2644ca9451feb6b194bd1b9cfcbf6459017f40bc6397e87fbe55a746/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-121129",
	                "Source": "/var/lib/docker/volumes/newest-cni-121129/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-121129",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-121129",
	                "name.minikube.sigs.k8s.io": "newest-cni-121129",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fd56a25fe7e11b70e4d92f61dd44312b28506ce3af2f371d45f8f1d2a04970f5",
	            "SandboxKey": "/var/run/docker/netns/fd56a25fe7e1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-121129": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:e6:da:24:33:14",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cd26f66f7d0715bf666ca6e5dc6891adf394cc9a58fe404ddf68c49d82b6f4c2",
	                    "EndpointID": "0d9d37391289d68345f704bec905cd529100684288cd1282575d1b546a235959",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-121129",
	                        "44f7c2aef6cc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
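The Ports map in the inspect output above holds minikube's host port forwards; for example, the API server port (8443/tcp) is published on 127.0.0.1:33136. A one-liner sketch for pulling that value directly with a Go template (same docker CLI used throughout this report):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-121129
	# prints: 33136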
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-121129 -n newest-cni-121129
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-121129 -n newest-cni-121129: exit status 2 (303.827362ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
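The harness treats exit status 2 from these status probes as potentially benign ("may be ok"): here the host and API server report Running while the kubelet was stopped by the pause attempt, consistent with the systemd lines in the kubelet log earlier. The --format flag takes a Go template over minikube's status output, so several fields can be rendered in one call; a sketch (Host and APIServer are the fields probed in this report; Kubelet is assumed to be exposed by the same struct):

	out/minikube-linux-amd64 status -p newest-cni-121129 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'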
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-121129 logs -n 25
E1010 18:22:20.460277    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/auto-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-556024 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:20 UTC │
	│ start   │ -p no-preload-556024 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:20 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-821769 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-821769 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ image   │ old-k8s-version-141193 image list --format=json                                                                                                                                                                                               │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p old-k8s-version-141193 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-821769 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ no-preload-556024 image list --format=json                                                                                                                                                                                                    │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p no-preload-556024 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ embed-certs-472518 image list --format=json                                                                                                                                                                                                   │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p embed-certs-472518 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ delete  │ -p no-preload-556024                                                                                                                                                                                                                          │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p no-preload-556024                                                                                                                                                                                                                          │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p embed-certs-472518                                                                                                                                                                                                                         │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p embed-certs-472518                                                                                                                                                                                                                         │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-121129 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │                     │
	│ stop    │ -p newest-cni-121129 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-121129 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ start   │ -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ image   │ newest-cni-121129 image list --format=json                                                                                                                                                                                                    │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ pause   │ -p newest-cni-121129 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:22:05
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:22:05.290569  335513 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:22:05.290861  335513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:22:05.290870  335513 out.go:374] Setting ErrFile to fd 2...
	I1010 18:22:05.290877  335513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:22:05.291147  335513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:22:05.291697  335513 out.go:368] Setting JSON to false
	I1010 18:22:05.292906  335513 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3865,"bootTime":1760116660,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:22:05.293008  335513 start.go:141] virtualization: kvm guest
	I1010 18:22:05.294961  335513 out.go:179] * [newest-cni-121129] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:22:05.296259  335513 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:22:05.296288  335513 notify.go:220] Checking for updates...
	I1010 18:22:05.298639  335513 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:22:05.299676  335513 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:22:05.300690  335513 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:22:05.301797  335513 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:22:05.302929  335513 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:22:05.307755  335513 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:22:05.308318  335513 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:22:05.332954  335513 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:22:05.333071  335513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:22:05.393251  335513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-10 18:22:05.383186457 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:22:05.393422  335513 docker.go:318] overlay module found
	I1010 18:22:05.395166  335513 out.go:179] * Using the docker driver based on existing profile
	I1010 18:22:05.396306  335513 start.go:305] selected driver: docker
	I1010 18:22:05.396321  335513 start.go:925] validating driver "docker" against &{Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:22:05.396438  335513 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:22:05.397122  335513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:22:05.458840  335513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-10 18:22:05.448230468 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:22:05.459176  335513 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1010 18:22:05.459216  335513 cni.go:84] Creating CNI manager for ""
	I1010 18:22:05.459260  335513 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:22:05.459302  335513 start.go:349] cluster config:
	{Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:22:05.461891  335513 out.go:179] * Starting "newest-cni-121129" primary control-plane node in "newest-cni-121129" cluster
	I1010 18:22:05.462953  335513 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:22:05.464080  335513 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:22:05.465182  335513 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:22:05.465219  335513 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:22:05.465239  335513 cache.go:58] Caching tarball of preloaded images
	I1010 18:22:05.465271  335513 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:22:05.465353  335513 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:22:05.465368  335513 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1010 18:22:05.465464  335513 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/config.json ...
	I1010 18:22:05.486563  335513 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:22:05.486586  335513 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:22:05.486605  335513 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:22:05.486632  335513 start.go:360] acquireMachinesLock for newest-cni-121129: {Name:mkd067d67013b78a79cc31e2d50fcfd69790fc6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:22:05.486702  335513 start.go:364] duration metric: took 48.282µs to acquireMachinesLock for "newest-cni-121129"
	I1010 18:22:05.486725  335513 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:22:05.486733  335513 fix.go:54] fixHost starting: 
	I1010 18:22:05.486937  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:05.505160  335513 fix.go:112] recreateIfNeeded on newest-cni-121129: state=Stopped err=<nil>
	W1010 18:22:05.505189  335513 fix.go:138] unexpected machine state, will restart: <nil>
	W1010 18:22:02.659629  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:04.660067  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	I1010 18:22:05.506971  335513 out.go:252] * Restarting existing docker container for "newest-cni-121129" ...
	I1010 18:22:05.507082  335513 cli_runner.go:164] Run: docker start newest-cni-121129
	I1010 18:22:05.744340  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:05.763128  335513 kic.go:430] container "newest-cni-121129" state is running.
	I1010 18:22:05.763484  335513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:22:05.782418  335513 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/config.json ...
	I1010 18:22:05.782704  335513 machine.go:93] provisionDockerMachine start ...
	I1010 18:22:05.782787  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:05.801168  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:05.801379  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:05.801392  335513 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:22:05.802022  335513 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53092->127.0.0.1:33133: read: connection reset by peer
	I1010 18:22:08.938319  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:22:08.938347  335513 ubuntu.go:182] provisioning hostname "newest-cni-121129"
	I1010 18:22:08.938432  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:08.956809  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:08.957009  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:08.957024  335513 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-121129 && echo "newest-cni-121129" | sudo tee /etc/hostname
	I1010 18:22:09.102637  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:22:09.102708  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:09.121495  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:09.121708  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:09.121725  335513 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-121129' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-121129/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-121129' | sudo tee -a /etc/hosts; 
				fi
			fi
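
The script above is an idempotent guard: it edits /etc/hosts only when no line already maps the new hostname, preferring to rewrite an existing 127.0.1.1 entry over appending a duplicate. A quick check of the result, as a sketch to run inside the node (e.g. via minikube ssh):

	grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts   # expect: 127.0.1.1 newest-cni-121129
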
	I1010 18:22:09.255802  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:22:09.255838  335513 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:22:09.255877  335513 ubuntu.go:190] setting up certificates
	I1010 18:22:09.255893  335513 provision.go:84] configureAuth start
	I1010 18:22:09.255959  335513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:22:09.273223  335513 provision.go:143] copyHostCerts
	I1010 18:22:09.273280  335513 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:22:09.273293  335513 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:22:09.273359  335513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:22:09.273459  335513 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:22:09.273468  335513 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:22:09.273494  335513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:22:09.273561  335513 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:22:09.273568  335513 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:22:09.273591  335513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:22:09.273652  335513 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.newest-cni-121129 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-121129]
	I1010 18:22:09.612120  335513 provision.go:177] copyRemoteCerts
	I1010 18:22:09.612187  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:22:09.612221  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:09.629812  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:09.726962  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:22:09.746555  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 18:22:09.766845  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 18:22:09.786986  335513 provision.go:87] duration metric: took 531.066176ms to configureAuth
	I1010 18:22:09.787015  335513 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:22:09.787209  335513 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:22:09.787337  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:09.805200  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:09.805389  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:09.805406  335513 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:22:10.098222  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:22:10.098249  335513 machine.go:96] duration metric: took 4.31552528s to provisionDockerMachine
	I1010 18:22:10.098261  335513 start.go:293] postStartSetup for "newest-cni-121129" (driver="docker")
	I1010 18:22:10.098276  335513 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:22:10.098357  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:22:10.098407  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.115908  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.213790  335513 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:22:10.217524  335513 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:22:10.217553  335513 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:22:10.217567  335513 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:22:10.217636  335513 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:22:10.217740  335513 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:22:10.217864  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:22:10.226684  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:22:10.248076  335513 start.go:296] duration metric: took 149.799111ms for postStartSetup
	I1010 18:22:10.248178  335513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:22:10.248226  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.266300  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.360213  335513 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:22:10.364797  335513 fix.go:56] duration metric: took 4.878059137s for fixHost
	I1010 18:22:10.364821  335513 start.go:83] releasing machines lock for "newest-cni-121129", held for 4.878105914s
	I1010 18:22:10.364878  335513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:22:10.383110  335513 ssh_runner.go:195] Run: cat /version.json
	I1010 18:22:10.383169  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.383208  335513 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:22:10.383290  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.401694  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.402069  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.549004  335513 ssh_runner.go:195] Run: systemctl --version
	I1010 18:22:10.555440  335513 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:22:10.589903  335513 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:22:10.594487  335513 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:22:10.594552  335513 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:22:10.603402  335513 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
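
The find step above would rename any stray bridge or podman CNI configs by appending .mk_disabled, so that the kindnet config chosen earlier is the only one the runtime loads; here nothing matched. To see what is actually present, a sketch (inside the node):

	ls -la /etc/cni/net.d/   # disabled files, if any, carry a .mk_disabled suffix
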
	I1010 18:22:10.603427  335513 start.go:495] detecting cgroup driver to use...
	I1010 18:22:10.603462  335513 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:22:10.603516  335513 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:22:10.617988  335513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:22:10.630757  335513 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:22:10.630811  335513 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:22:10.645086  335513 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:22:10.659116  335513 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:22:10.739783  335513 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:22:10.821838  335513 docker.go:234] disabling docker service ...
	I1010 18:22:10.821898  335513 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:22:10.836530  335513 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:22:10.849438  335513 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:22:10.926810  335513 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:22:11.009431  335513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:22:11.022257  335513 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:22:11.037720  335513 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:22:11.037792  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.047811  335513 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:22:11.047875  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.057692  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.067884  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.077525  335513 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:22:11.087002  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.096828  335513 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.106020  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.115595  335513 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:22:11.123688  335513 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:22:11.131885  335513 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:22:11.209940  335513 ssh_runner.go:195] Run: sudo systemctl restart crio
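
Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to systemd with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is added to default_sysctls; the daemon-reload and restart then apply the result. A spot-check sketch (inside the node):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
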
	I1010 18:22:11.350816  335513 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:22:11.350877  335513 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:22:11.355096  335513 start.go:563] Will wait 60s for crictl version
	I1010 18:22:11.355145  335513 ssh_runner.go:195] Run: which crictl
	I1010 18:22:11.358770  335513 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:22:11.384561  335513 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:22:11.384639  335513 ssh_runner.go:195] Run: crio --version
	I1010 18:22:11.411320  335513 ssh_runner.go:195] Run: crio --version
	I1010 18:22:11.440045  335513 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1010 18:22:06.661425  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:09.158000  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:11.160422  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	I1010 18:22:11.441103  335513 cli_runner.go:164] Run: docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:22:11.458538  335513 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1010 18:22:11.462704  335513 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
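
The one-liner above is an idempotent /etc/hosts update: strip any existing host.minikube.internal line, append the fresh mapping, and copy the temp file back over /etc/hosts. The same idiom spelled out as a sketch (IP and NAME mirror the values in the command above):

	IP=192.168.85.1; NAME=host.minikube.internal
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
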
	I1010 18:22:11.475134  335513 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1010 18:22:11.476017  335513 kubeadm.go:883] updating cluster {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:22:11.476150  335513 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:22:11.476202  335513 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:22:11.507304  335513 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:22:11.507323  335513 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:22:11.507363  335513 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:22:11.533243  335513 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:22:11.533265  335513 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:22:11.533272  335513 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1010 18:22:11.533353  335513 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-121129 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1010 18:22:11.533416  335513 ssh_runner.go:195] Run: crio config
	I1010 18:22:11.578761  335513 cni.go:84] Creating CNI manager for ""
	I1010 18:22:11.578789  335513 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:22:11.578804  335513 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1010 18:22:11.578824  335513 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-121129 NodeName:newest-cni-121129 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:22:11.578929  335513 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-121129"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
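
The rendered config above is a single YAML stream holding four config documents that kubeadm consumes, separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Once it lands on the node (as /var/tmp/minikube/kubeadm.yaml.new, per the scp step below), a structural sketch:

	grep -E '^(apiVersion|kind):' /var/tmp/minikube/kubeadm.yaml.new   # expect four apiVersion/kind pairs
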
	I1010 18:22:11.578984  335513 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:22:11.587839  335513 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:22:11.587894  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:22:11.596414  335513 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1010 18:22:11.610238  335513 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:22:11.623960  335513 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1010 18:22:11.637763  335513 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:22:11.641378  335513 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:22:11.652228  335513 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:22:11.733285  335513 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:22:11.757177  335513 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129 for IP: 192.168.85.2
	I1010 18:22:11.757199  335513 certs.go:195] generating shared ca certs ...
	I1010 18:22:11.757219  335513 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:22:11.757370  335513 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:22:11.757429  335513 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:22:11.757441  335513 certs.go:257] generating profile certs ...
	I1010 18:22:11.757572  335513 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key
	I1010 18:22:11.757653  335513 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7
	I1010 18:22:11.757703  335513 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key
	I1010 18:22:11.757835  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:22:11.757872  335513 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:22:11.757885  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:22:11.757915  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:22:11.757954  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:22:11.757981  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:22:11.758033  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:22:11.758775  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:22:11.778857  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:22:11.801208  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:22:11.824760  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:22:11.851378  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:22:11.870850  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:22:11.889951  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:22:11.908879  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:22:11.928343  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:22:11.948375  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:22:11.969276  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:22:11.988998  335513 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:22:12.003247  335513 ssh_runner.go:195] Run: openssl version
	I1010 18:22:12.009554  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:22:12.018800  335513 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:22:12.022724  335513 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:22:12.022777  335513 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:22:12.057604  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:22:12.067287  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:22:12.076762  335513 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:22:12.080550  335513 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:22:12.080594  335513 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:22:12.114518  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:22:12.123583  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:22:12.132960  335513 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:22:12.137033  335513 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:22:12.137103  335513 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:22:12.172587  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
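
Each openssl x509 -hash -noout run above prints the subject-name hash that OpenSSL uses for CA lookups: trust resolution for a certificate reads /etc/ssl/certs/<hash>.0, which is why the follow-up commands create links such as b5213941.0 and 3ec20f2e.0. Reproducing one link by hand, as a sketch with the same paths:

	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"
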
	I1010 18:22:12.181976  335513 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:22:12.185849  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:22:12.220072  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:22:12.255822  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:22:12.300141  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:22:12.343441  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:22:12.393734  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
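
The -checkend 86400 runs above make openssl exit non-zero when a certificate expires within the next 86400 seconds (24 hours), letting the restart path confirm the existing control-plane certificates are still usable rather than regenerating them. For one cert, as a sketch:

	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
		&& echo "valid for at least 24h" || echo "expiring within 24h"
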
	I1010 18:22:12.454003  335513 kubeadm.go:400] StartCluster: {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:22:12.454110  335513 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:22:12.454196  335513 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:22:12.487305  335513 cri.go:89] found id: "7f03778cf9929180d97b99c5a7dabc1b07cab95c28d238404c4b3cdda1350b21"
	I1010 18:22:12.487332  335513 cri.go:89] found id: "bf112ce4d768b53d1e90f30761c3ce870d54e55a9c7241326c2c1e377046fb0b"
	I1010 18:22:12.487338  335513 cri.go:89] found id: "ab61bce748bfcc69bd3fc766155054b877fa6b8c7695ee04c693a7820d3e6b33"
	I1010 18:22:12.487343  335513 cri.go:89] found id: "ef0f16a1ff912c99555175b679ca7c2499386f3f7c4b4c9a7270a180e8c15937"
	I1010 18:22:12.487347  335513 cri.go:89] found id: ""
	I1010 18:22:12.487394  335513 ssh_runner.go:195] Run: sudo runc list -f json
	W1010 18:22:12.500489  335513 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:22:12Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:22:12.500556  335513 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:22:12.509425  335513 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1010 18:22:12.509447  335513 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1010 18:22:12.509493  335513 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 18:22:12.518026  335513 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:22:12.518736  335513 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-121129" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:22:12.519045  335513 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-5815/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-121129" cluster setting kubeconfig missing "newest-cni-121129" context setting]
	I1010 18:22:12.519593  335513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:22:12.520854  335513 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 18:22:12.530013  335513 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1010 18:22:12.530074  335513 kubeadm.go:601] duration metric: took 20.594831ms to restartPrimaryControlPlane
	I1010 18:22:12.530092  335513 kubeadm.go:402] duration metric: took 76.095724ms to StartCluster
	I1010 18:22:12.530115  335513 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:22:12.530186  335513 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:22:12.530994  335513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:22:12.531256  335513 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:22:12.531320  335513 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:22:12.531440  335513 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-121129"
	I1010 18:22:12.531451  335513 addons.go:69] Setting dashboard=true in profile "newest-cni-121129"
	I1010 18:22:12.531464  335513 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-121129"
	I1010 18:22:12.531470  335513 addons.go:238] Setting addon dashboard=true in "newest-cni-121129"
	W1010 18:22:12.531473  335513 addons.go:247] addon storage-provisioner should already be in state true
	W1010 18:22:12.531478  335513 addons.go:247] addon dashboard should already be in state true
	I1010 18:22:12.531482  335513 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:22:12.531493  335513 addons.go:69] Setting default-storageclass=true in profile "newest-cni-121129"
	I1010 18:22:12.531516  335513 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:22:12.531531  335513 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-121129"
	I1010 18:22:12.531504  335513 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:22:12.531869  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.532071  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.532071  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.535048  335513 out.go:179] * Verifying Kubernetes components...
	I1010 18:22:12.536132  335513 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:22:12.558523  335513 addons.go:238] Setting addon default-storageclass=true in "newest-cni-121129"
	W1010 18:22:12.558549  335513 addons.go:247] addon default-storageclass should already be in state true
	I1010 18:22:12.558578  335513 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:22:12.558631  335513 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1010 18:22:12.559047  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.559747  335513 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:22:12.560780  335513 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1010 18:22:12.560840  335513 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:22:12.560860  335513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:22:12.560910  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:12.565594  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1010 18:22:12.565614  335513 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1010 18:22:12.565676  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:12.591733  335513 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:22:12.591757  335513 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:22:12.591901  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:12.596614  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:12.597384  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:12.615727  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:12.677916  335513 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:22:12.692091  335513 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:22:12.692167  335513 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:22:12.705996  335513 api_server.go:72] duration metric: took 174.708821ms to wait for apiserver process to appear ...
	I1010 18:22:12.706031  335513 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:22:12.706074  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:12.762093  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1010 18:22:12.762118  335513 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1010 18:22:12.763137  335513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:22:12.775071  335513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:22:12.780905  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1010 18:22:12.780927  335513 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1010 18:22:12.802455  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1010 18:22:12.802487  335513 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1010 18:22:12.823607  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1010 18:22:12.823636  335513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1010 18:22:12.839919  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1010 18:22:12.839944  335513 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1010 18:22:12.856483  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1010 18:22:12.856511  335513 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1010 18:22:12.873148  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1010 18:22:12.873175  335513 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1010 18:22:12.888146  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1010 18:22:12.888174  335513 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1010 18:22:12.903848  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:22:12.903872  335513 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1010 18:22:12.922065  335513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:22:13.877182  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 18:22:13.877224  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 18:22:13.877242  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:13.912048  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 18:22:13.912094  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 18:22:14.206894  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:14.212024  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:22:14.212069  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
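
The verbose 500 narrows the failure to two one-shot initializers, rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes, which run once per apiserver start; every other check already reports ok, so the endpoint flips to 200 as soon as they finish (as it does at 18:22:15 below). The same verbose view can be fetched through an authenticated client:

    # Verbose health check via kubectl (authenticated, so no 403)
    kubectl get --raw '/healthz?verbose'
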
	I1010 18:22:14.404024  335513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.640853795s)
	I1010 18:22:14.404112  335513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.629010505s)
	I1010 18:22:14.404217  335513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.482117938s)
	I1010 18:22:14.406078  335513 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-121129 addons enable metrics-server
	
	I1010 18:22:14.415455  335513 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1010 18:22:14.416687  335513 addons.go:514] duration metric: took 1.885368042s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
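
With the three addons reported enabled, their state can be cross-checked against the cluster; a quick verification sketch (the dashboard namespace is kubernetes-dashboard, per the clusterIP allocations in the apiserver log further down):

    # Confirm addon state as minikube sees it
    out/minikube-linux-amd64 -p newest-cni-121129 addons list
    # Confirm the dashboard workload actually landed
    kubectl -n kubernetes-dashboard get pods,svc
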
	I1010 18:22:14.706642  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:14.710574  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:22:14.710598  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 18:22:15.206122  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:15.211233  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1010 18:22:15.212217  335513 api_server.go:141] control plane version: v1.34.1
	I1010 18:22:15.212245  335513 api_server.go:131] duration metric: took 2.506207886s to wait for apiserver health ...
	I1010 18:22:15.212254  335513 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:22:15.216033  335513 system_pods.go:59] 8 kube-system pods found
	I1010 18:22:15.216081  335513 system_pods.go:61] "coredns-66bc5c9577-bbxwj" [54b0d9c6-555f-476b-90d2-aca531478020] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1010 18:22:15.216094  335513 system_pods.go:61] "etcd-newest-cni-121129" [24b69503-efe0-4418-b656-58b90f7d7420] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:22:15.216110  335513 system_pods.go:61] "kindnet-9ml5n" [22d3f2b7-d65b-4c8e-a02f-58ead02d9794] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1010 18:22:15.216124  335513 system_pods.go:61] "kube-apiserver-newest-cni-121129" [c429c3d5-c663-453e-9d48-8eacc534ebf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:22:15.216133  335513 system_pods.go:61] "kube-controller-manager-newest-cni-121129" [5e35e588-6a2a-414e-aea9-4d1d8b7897dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:22:15.216142  335513 system_pods.go:61] "kube-proxy-sw4cj" [82e9ec15-44c0-4bfd-8b16-3862f7bb01a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1010 18:22:15.216147  335513 system_pods.go:61] "kube-scheduler-newest-cni-121129" [55bc4998-af60-4c82-a3cc-18ccc57ede90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:22:15.216160  335513 system_pods.go:61] "storage-provisioner" [c4cb75b4-5b40-4243-b3df-fd256cb036f9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1010 18:22:15.216168  335513 system_pods.go:74] duration metric: took 3.909666ms to wait for pod list to return data ...
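
Both Pending pods (coredns and storage-provisioner) are blocked by the same node.kubernetes.io/not-ready:NoSchedule taint, while the rest of the list runs anyway: the control-plane pods are kubelet-managed static pods that bypass the scheduler, and the daemonset pods (kindnet, kube-proxy) tolerate the taint. The taint is visible directly on the node and should clear once the CNI is back up:

    # Inspect the taint holding the two Pending pods back
    kubectl get node newest-cni-121129 -o jsonpath='{.spec.taints}'
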
	I1010 18:22:15.216178  335513 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:22:15.218489  335513 default_sa.go:45] found service account: "default"
	I1010 18:22:15.218507  335513 default_sa.go:55] duration metric: took 2.324261ms for default service account to be created ...
	I1010 18:22:15.218517  335513 kubeadm.go:586] duration metric: took 2.68723566s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1010 18:22:15.218530  335513 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:22:15.220763  335513 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:22:15.220790  335513 node_conditions.go:123] node cpu capacity is 8
	I1010 18:22:15.220807  335513 node_conditions.go:105] duration metric: took 2.269966ms to run NodePressure ...
	I1010 18:22:15.220826  335513 start.go:241] waiting for startup goroutines ...
	I1010 18:22:15.220838  335513 start.go:246] waiting for cluster config update ...
	I1010 18:22:15.220851  335513 start.go:255] writing updated cluster config ...
	I1010 18:22:15.221177  335513 ssh_runner.go:195] Run: rm -f paused
	I1010 18:22:15.271095  335513 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:22:15.273221  335513 out.go:179] * Done! kubectl is now configured to use "newest-cni-121129" cluster and "default" namespace by default
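
"Done!" means the kubeconfig context was switched to the profile name, so plain kubectl now targets this cluster; a quick sanity check:

    kubectl config current-context   # expected: newest-cni-121129
    kubectl get nodes
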
	W1010 18:22:13.657932  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:15.658631  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.133089935Z" level=info msg="Running pod sandbox: kube-system/kindnet-9ml5n/POD" id=f84ff61b-3b3a-46b4-820c-de94daf6e2b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.133179712Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.133753351Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.134384937Z" level=info msg="Ran pod sandbox ba0f1feaaaa83e75962ce8aed522e5db0e495f157bb3189dd739b5c2d05f42dc with infra container: kube-system/kube-proxy-sw4cj/POD" id=0b79577e-a196-4ab8-a5eb-38357f3c90a0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.135400396Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=81023e8b-c2ce-46b2-b792-b275e5cc4d22 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.136113128Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=f84ff61b-3b3a-46b4-820c-de94daf6e2b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.136150063Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fcd43586-197c-4397-83e6-ed710b88a411 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.137177773Z" level=info msg="Creating container: kube-system/kube-proxy-sw4cj/kube-proxy" id=857419a0-419a-41b9-897a-00ab96141de8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.137532887Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.137640463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.138222615Z" level=info msg="Ran pod sandbox 9affa195e6a5e78bf758247bee73ba1280e90596f95d7a99c0e4175e8bec4f99 with infra container: kube-system/kindnet-9ml5n/POD" id=f84ff61b-3b3a-46b4-820c-de94daf6e2b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.139161663Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=b50cb72a-cf44-4875-b53c-a9e638de52b9 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.140411234Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f9384bf0-49c9-430b-b742-325c4ca11984 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.141434142Z" level=info msg="Creating container: kube-system/kindnet-9ml5n/kindnet-cni" id=04bcfb28-5a12-4e60-8ae9-8fda864bd6a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.141830154Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.141944095Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.142261715Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.145024803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.145605788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.171447941Z" level=info msg="Created container 54fc410f4ca645d11135ea775c4765a38d35954eaee3c37750a64a1cf07f2ee7: kube-system/kindnet-9ml5n/kindnet-cni" id=04bcfb28-5a12-4e60-8ae9-8fda864bd6a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.172103841Z" level=info msg="Starting container: 54fc410f4ca645d11135ea775c4765a38d35954eaee3c37750a64a1cf07f2ee7" id=bfd61f8f-1d36-4cea-9ff0-00577e8607eb name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.172881118Z" level=info msg="Created container c6fa99fb85d5287d72da51a3db60987a29f6798385a580361fb3660a26987be3: kube-system/kube-proxy-sw4cj/kube-proxy" id=857419a0-419a-41b9-897a-00ab96141de8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.173464522Z" level=info msg="Starting container: c6fa99fb85d5287d72da51a3db60987a29f6798385a580361fb3660a26987be3" id=ecacf92e-ebbc-4f1b-af26-3f820bb5d6d0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.174189355Z" level=info msg="Started container" PID=1054 containerID=54fc410f4ca645d11135ea775c4765a38d35954eaee3c37750a64a1cf07f2ee7 description=kube-system/kindnet-9ml5n/kindnet-cni id=bfd61f8f-1d36-4cea-9ff0-00577e8607eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=9affa195e6a5e78bf758247bee73ba1280e90596f95d7a99c0e4175e8bec4f99
	Oct 10 18:22:15 newest-cni-121129 crio[527]: time="2025-10-10T18:22:15.176671811Z" level=info msg="Started container" PID=1055 containerID=c6fa99fb85d5287d72da51a3db60987a29f6798385a580361fb3660a26987be3 description=kube-system/kube-proxy-sw4cj/kube-proxy id=ecacf92e-ebbc-4f1b-af26-3f820bb5d6d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ba0f1feaaaa83e75962ce8aed522e5db0e495f157bb3189dd739b5c2d05f42dc
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	54fc410f4ca64       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   9affa195e6a5e       kindnet-9ml5n                               kube-system
	c6fa99fb85d52       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   ba0f1feaaaa83       kube-proxy-sw4cj                            kube-system
	7f03778cf9929       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   6a0df69c3e1a3       etcd-newest-cni-121129                      kube-system
	bf112ce4d768b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   72ec89047afac       kube-controller-manager-newest-cni-121129   kube-system
	ab61bce748bfc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   16da27916f279       kube-scheduler-newest-cni-121129            kube-system
	ef0f16a1ff912       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   528a5198ca4aa       kube-apiserver-newest-cni-121129            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-121129
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-121129
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=newest-cni-121129
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_21_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:21:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-121129
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:22:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:22:13 +0000   Fri, 10 Oct 2025 18:21:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:22:13 +0000   Fri, 10 Oct 2025 18:21:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:22:13 +0000   Fri, 10 Oct 2025 18:21:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 10 Oct 2025 18:22:13 +0000   Fri, 10 Oct 2025 18:21:49 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-121129
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                88739fa5-1b3b-4f5e-adda-c7c74720b2ef
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-121129                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         26s
	  kube-system                 kindnet-9ml5n                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-121129             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-newest-cni-121129    200m (2%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-sw4cj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-121129             100m (1%)     0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 20s   kube-proxy       
	  Normal  Starting                 5s    kube-proxy       
	  Normal  Starting                 27s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s   kubelet          Node newest-cni-121129 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s   kubelet          Node newest-cni-121129 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s   kubelet          Node newest-cni-121129 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22s   node-controller  Node newest-cni-121129 event: Registered Node newest-cni-121129 in Controller
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-121129 event: Registered Node newest-cni-121129 in Controller
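
Ready=False here is the flip side of the taint seen earlier: the kubelet reports NetworkReady=false because no CNI config exists yet in /etc/cni/net.d/, and kindnet had only just restarted when this snapshot was taken. The restarted CNI pod and the node's Ready message can be checked directly:

    kubectl -n kube-system get pod kindnet-9ml5n
    kubectl get node newest-cni-121129 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
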
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 95 0c 3e 92 2e 08 06
	[  +0.052845] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[ +11.354316] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[  +7.101927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 9b 73 27 8c 80 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[  +6.287191] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 27 2d 28 d6 46 08 06
	[  +0.000293] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[Oct10 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 8c 22 f6 6b cf 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 29 bf 13 20 f9 08 06
	[ +15.511156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e d6 74 aa 27 d0 08 06
	[  +0.008495] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	[Oct10 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 0b 54 33 52 4e 08 06
	[  +0.000597] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	
	
	==> etcd [7f03778cf9929180d97b99c5a7dabc1b07cab95c28d238404c4b3cdda1350b21] <==
	{"level":"warn","ts":"2025-10-10T18:22:13.254893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.265190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.272013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.280042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.286382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.293960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.300488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.306801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.313246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.319626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.327162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.334531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.341941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.349137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.356484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.364261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.375141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.385270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.392767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.401047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.408816Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.423422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.431665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.438478Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:22:13.481623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50586","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:22:20 up  1:04,  0 user,  load average: 4.65, 4.58, 3.00
	Linux newest-cni-121129 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [54fc410f4ca645d11135ea775c4765a38d35954eaee3c37750a64a1cf07f2ee7] <==
	I1010 18:22:15.338713       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:22:15.338952       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1010 18:22:15.339145       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:22:15.339162       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:22:15.339182       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:22:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:22:15.543117       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:22:15.543185       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:22:15.543198       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:22:15.543443       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 18:22:15.943914       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:22:15.943945       1 metrics.go:72] Registering metrics
	I1010 18:22:15.944036       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [ef0f16a1ff912c99555175b679ca7c2499386f3f7c4b4c9a7270a180e8c15937] <==
	I1010 18:22:13.968307       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1010 18:22:13.968336       1 aggregator.go:171] initial CRD sync complete...
	I1010 18:22:13.968346       1 autoregister_controller.go:144] Starting autoregister controller
	I1010 18:22:13.968352       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1010 18:22:13.968357       1 cache.go:39] Caches are synced for autoregister controller
	I1010 18:22:13.967936       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1010 18:22:13.968505       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1010 18:22:13.968945       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1010 18:22:13.969038       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1010 18:22:13.973607       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1010 18:22:13.980381       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1010 18:22:13.997759       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:22:14.020035       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1010 18:22:14.207397       1 controller.go:667] quota admission added evaluator for: namespaces
	I1010 18:22:14.237762       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1010 18:22:14.257977       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:22:14.266612       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:22:14.273400       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1010 18:22:14.305483       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.206.163"}
	I1010 18:22:14.315876       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.250.116"}
	I1010 18:22:14.871371       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:22:17.545200       1 controller.go:667] quota admission added evaluator for: endpoints
	I1010 18:22:17.645389       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1010 18:22:17.694307       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [bf112ce4d768b53d1e90f30761c3ce870d54e55a9c7241326c2c1e377046fb0b] <==
	I1010 18:22:17.252092       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1010 18:22:17.254918       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1010 18:22:17.257157       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1010 18:22:17.259437       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1010 18:22:17.260613       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1010 18:22:17.271799       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1010 18:22:17.271904       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1010 18:22:17.271959       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1010 18:22:17.271977       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1010 18:22:17.271984       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1010 18:22:17.272974       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1010 18:22:17.275274       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1010 18:22:17.291797       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1010 18:22:17.291823       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1010 18:22:17.291916       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1010 18:22:17.292985       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1010 18:22:17.293013       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1010 18:22:17.293030       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1010 18:22:17.293119       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1010 18:22:17.293150       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1010 18:22:17.293177       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1010 18:22:17.297735       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1010 18:22:17.298936       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:22:17.309110       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:22:17.314279       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c6fa99fb85d5287d72da51a3db60987a29f6798385a580361fb3660a26987be3] <==
	I1010 18:22:15.210674       1 server_linux.go:53] "Using iptables proxy"
	I1010 18:22:15.279204       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 18:22:15.380169       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 18:22:15.380219       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1010 18:22:15.380314       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:22:15.400762       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:22:15.400823       1 server_linux.go:132] "Using iptables Proxier"
	I1010 18:22:15.405998       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:22:15.406404       1 server.go:527] "Version info" version="v1.34.1"
	I1010 18:22:15.406446       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:22:15.407795       1 config.go:106] "Starting endpoint slice config controller"
	I1010 18:22:15.407817       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 18:22:15.407886       1 config.go:200] "Starting service config controller"
	I1010 18:22:15.407907       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 18:22:15.407923       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 18:22:15.407928       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 18:22:15.408025       1 config.go:309] "Starting node config controller"
	I1010 18:22:15.408033       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 18:22:15.408039       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 18:22:15.508662       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1010 18:22:15.508802       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 18:22:15.508797       1 shared_informer.go:356] "Caches are synced" controller="service config"
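
kube-proxy came back in iptables mode and resynced its caches within roughly 300ms of start; its service rules live in the KUBE-SERVICES chain of the nat table, which can be inspected on the node:

    # On the node: top of kube-proxy's service dispatch chain
    sudo iptables -t nat -L KUBE-SERVICES -n | head
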
	
	
	==> kube-scheduler [ab61bce748bfcc69bd3fc766155054b877fa6b8c7695ee04c693a7820d3e6b33] <==
	I1010 18:22:13.128224       1 serving.go:386] Generated self-signed cert in-memory
	W1010 18:22:13.886343       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1010 18:22:13.886378       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1010 18:22:13.886390       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1010 18:22:13.886399       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1010 18:22:13.923149       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1010 18:22:13.923179       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:22:13.934839       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:22:13.934934       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:22:13.935019       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1010 18:22:13.935210       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1010 18:22:14.035837       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: E1010 18:22:13.864312     679 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-121129\" not found" node="newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.928752     679 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: E1010 18:22:13.942346     679 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-121129\" already exists" pod="kube-system/etcd-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.942390     679 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: E1010 18:22:13.948522     679 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-121129\" already exists" pod="kube-system/kube-apiserver-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.948552     679 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: E1010 18:22:13.954601     679 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-121129\" already exists" pod="kube-system/kube-controller-manager-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.954638     679 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: E1010 18:22:13.959151     679 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-121129\" already exists" pod="kube-system/kube-scheduler-newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.992022     679 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.992144     679 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-121129"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.992185     679 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 10 18:22:13 newest-cni-121129 kubelet[679]: I1010 18:22:13.993076     679 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.823611     679 apiserver.go:52] "Watching apiserver"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.828957     679 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.865444     679 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-121129"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: E1010 18:22:14.871774     679 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-121129\" already exists" pod="kube-system/kube-scheduler-newest-cni-121129"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.901249     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22d3f2b7-d65b-4c8e-a02f-58ead02d9794-xtables-lock\") pod \"kindnet-9ml5n\" (UID: \"22d3f2b7-d65b-4c8e-a02f-58ead02d9794\") " pod="kube-system/kindnet-9ml5n"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.901304     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22d3f2b7-d65b-4c8e-a02f-58ead02d9794-lib-modules\") pod \"kindnet-9ml5n\" (UID: \"22d3f2b7-d65b-4c8e-a02f-58ead02d9794\") " pod="kube-system/kindnet-9ml5n"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.901330     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82e9ec15-44c0-4bfd-8b16-3862f7bb01a6-xtables-lock\") pod \"kube-proxy-sw4cj\" (UID: \"82e9ec15-44c0-4bfd-8b16-3862f7bb01a6\") " pod="kube-system/kube-proxy-sw4cj"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.901355     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/22d3f2b7-d65b-4c8e-a02f-58ead02d9794-cni-cfg\") pod \"kindnet-9ml5n\" (UID: \"22d3f2b7-d65b-4c8e-a02f-58ead02d9794\") " pod="kube-system/kindnet-9ml5n"
	Oct 10 18:22:14 newest-cni-121129 kubelet[679]: I1010 18:22:14.901432     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82e9ec15-44c0-4bfd-8b16-3862f7bb01a6-lib-modules\") pod \"kube-proxy-sw4cj\" (UID: \"82e9ec15-44c0-4bfd-8b16-3862f7bb01a6\") " pod="kube-system/kube-proxy-sw4cj"
	Oct 10 18:22:16 newest-cni-121129 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 10 18:22:16 newest-cni-121129 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 10 18:22:16 newest-cni-121129 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
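
Note: the repeated "Failed creating a mirror pod ... already exists" kubelet errors above are expected after a kubelet restart; the mirror pods for the static manifests survive in the API server, so re-creation is rejected. A minimal check, assuming the cluster is still reachable (pod name taken from the log above; mirror pods carry the kubernetes.io/config.mirror annotation):

	kubectl --context newest-cni-121129 -n kube-system get pod kube-scheduler-newest-cni-121129 -o jsonpath='{.metadata.annotations.kubernetes\.io/config\.mirror}'

A non-empty hash from that query confirms the "already exists" pod is a live mirror of the static manifest, not a stray workload.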
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-121129 -n newest-cni-121129
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-121129 -n newest-cni-121129: exit status 2 (311.622261ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-121129 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-bbxwj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fgsgr kubernetes-dashboard-855c9754f9-95jxx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-121129 describe pod coredns-66bc5c9577-bbxwj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fgsgr kubernetes-dashboard-855c9754f9-95jxx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-121129 describe pod coredns-66bc5c9577-bbxwj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fgsgr kubernetes-dashboard-855c9754f9-95jxx: exit status 1 (60.749157ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-bbxwj" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-fgsgr" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-95jxx" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-121129 describe pod coredns-66bc5c9577-bbxwj storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fgsgr kubernetes-dashboard-855c9754f9-95jxx: exit status 1
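
Note: the NotFound errors above most likely come from the helper describing the pods without a namespace, so kubectl looks in default, while the listed pods live in kube-system and kubernetes-dashboard. The non-running-pod query itself can be re-run essentially verbatim (quoting added for the shell):

	kubectl --context newest-cni-121129 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'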
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.33s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (5.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-821769 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-821769 --alsologtostderr -v=1: exit status 80 (2.118252873s)

-- stdout --
	* Pausing node default-k8s-diff-port-821769 ... 
	
	

-- /stdout --
** stderr ** 
	I1010 18:22:36.248366  339754 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:22:36.248657  339754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:22:36.248668  339754 out.go:374] Setting ErrFile to fd 2...
	I1010 18:22:36.248672  339754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:22:36.248880  339754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:22:36.249122  339754 out.go:368] Setting JSON to false
	I1010 18:22:36.249169  339754 mustload.go:65] Loading cluster: default-k8s-diff-port-821769
	I1010 18:22:36.249496  339754 config.go:182] Loaded profile config "default-k8s-diff-port-821769": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:22:36.249866  339754 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-821769 --format={{.State.Status}}
	I1010 18:22:36.268347  339754 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:22:36.268606  339754 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:22:36.322129  339754 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-10 18:22:36.312376818 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:22:36.322710  339754 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-821769 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1010 18:22:36.324437  339754 out.go:179] * Pausing node default-k8s-diff-port-821769 ... 
	I1010 18:22:36.325410  339754 host.go:66] Checking if "default-k8s-diff-port-821769" exists ...
	I1010 18:22:36.325637  339754 ssh_runner.go:195] Run: systemctl --version
	I1010 18:22:36.325681  339754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-821769
	I1010 18:22:36.342503  339754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/default-k8s-diff-port-821769/id_rsa Username:docker}
	I1010 18:22:36.437662  339754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:22:36.450265  339754 pause.go:52] kubelet running: true
	I1010 18:22:36.450319  339754 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:22:36.612618  339754 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:22:36.612694  339754 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:22:36.678279  339754 cri.go:89] found id: "d3f4f58452fe6cad87bbeefc0306b06959951916beb1588e1284049f7b3f4f98"
	I1010 18:22:36.678303  339754 cri.go:89] found id: "d5dadd7d16f48731fcf9902bc7edb1c11a125db6a4169fdb24d901c1afb65224"
	I1010 18:22:36.678306  339754 cri.go:89] found id: "54882de88b25d351cee0feb4833af2c57b273bf1f3a3c88e36f676b1619686cb"
	I1010 18:22:36.678310  339754 cri.go:89] found id: "c70a052ca72d3fcf8221f750d50a9946693d80a13afebd88510aad7b927f385b"
	I1010 18:22:36.678312  339754 cri.go:89] found id: "6fc01004fca02171293288225d03c012204cdc683fe6069b66f91de42b957e10"
	I1010 18:22:36.678317  339754 cri.go:89] found id: "1352ca41b0e7626fbf6ee43638506dfab18bd157572e9128f411ac1c5ae54538"
	I1010 18:22:36.678319  339754 cri.go:89] found id: "2aeadcb9e03cc805af5eff4f1b521299f31e4d618387d10eef543b4e95787f70"
	I1010 18:22:36.678322  339754 cri.go:89] found id: "6c6e229b2a8311cf4d60aad6c602e02c2923b5ba2309e536076e40579456e8e2"
	I1010 18:22:36.678324  339754 cri.go:89] found id: "c3f03c923ad6830325d9888fdf2ad9de25ac73298e25b5812f72951d65af2eec"
	I1010 18:22:36.678338  339754 cri.go:89] found id: "0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd"
	I1010 18:22:36.678341  339754 cri.go:89] found id: "f1349f3edaedc69a7aa332fe3f3662c37e7ed235777aff61417cc51c8e32a81e"
	I1010 18:22:36.678343  339754 cri.go:89] found id: ""
	I1010 18:22:36.678388  339754 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:22:36.690711  339754 retry.go:31] will retry after 134.435825ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:22:36Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:22:36.826141  339754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:22:36.839433  339754 pause.go:52] kubelet running: false
	I1010 18:22:36.839490  339754 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:22:36.968910  339754 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:22:36.968997  339754 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:22:37.032495  339754 cri.go:89] found id: "d3f4f58452fe6cad87bbeefc0306b06959951916beb1588e1284049f7b3f4f98"
	I1010 18:22:37.032514  339754 cri.go:89] found id: "d5dadd7d16f48731fcf9902bc7edb1c11a125db6a4169fdb24d901c1afb65224"
	I1010 18:22:37.032518  339754 cri.go:89] found id: "54882de88b25d351cee0feb4833af2c57b273bf1f3a3c88e36f676b1619686cb"
	I1010 18:22:37.032521  339754 cri.go:89] found id: "c70a052ca72d3fcf8221f750d50a9946693d80a13afebd88510aad7b927f385b"
	I1010 18:22:37.032524  339754 cri.go:89] found id: "6fc01004fca02171293288225d03c012204cdc683fe6069b66f91de42b957e10"
	I1010 18:22:37.032527  339754 cri.go:89] found id: "1352ca41b0e7626fbf6ee43638506dfab18bd157572e9128f411ac1c5ae54538"
	I1010 18:22:37.032530  339754 cri.go:89] found id: "2aeadcb9e03cc805af5eff4f1b521299f31e4d618387d10eef543b4e95787f70"
	I1010 18:22:37.032532  339754 cri.go:89] found id: "6c6e229b2a8311cf4d60aad6c602e02c2923b5ba2309e536076e40579456e8e2"
	I1010 18:22:37.032535  339754 cri.go:89] found id: "c3f03c923ad6830325d9888fdf2ad9de25ac73298e25b5812f72951d65af2eec"
	I1010 18:22:37.032553  339754 cri.go:89] found id: "0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd"
	I1010 18:22:37.032559  339754 cri.go:89] found id: "f1349f3edaedc69a7aa332fe3f3662c37e7ed235777aff61417cc51c8e32a81e"
	I1010 18:22:37.032561  339754 cri.go:89] found id: ""
	I1010 18:22:37.032594  339754 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:22:37.044565  339754 retry.go:31] will retry after 428.238131ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:22:37Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:22:37.473170  339754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:22:37.496511  339754 pause.go:52] kubelet running: false
	I1010 18:22:37.496570  339754 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:22:37.627951  339754 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:22:37.628081  339754 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:22:37.690596  339754 cri.go:89] found id: "d3f4f58452fe6cad87bbeefc0306b06959951916beb1588e1284049f7b3f4f98"
	I1010 18:22:37.690615  339754 cri.go:89] found id: "d5dadd7d16f48731fcf9902bc7edb1c11a125db6a4169fdb24d901c1afb65224"
	I1010 18:22:37.690619  339754 cri.go:89] found id: "54882de88b25d351cee0feb4833af2c57b273bf1f3a3c88e36f676b1619686cb"
	I1010 18:22:37.690623  339754 cri.go:89] found id: "c70a052ca72d3fcf8221f750d50a9946693d80a13afebd88510aad7b927f385b"
	I1010 18:22:37.690625  339754 cri.go:89] found id: "6fc01004fca02171293288225d03c012204cdc683fe6069b66f91de42b957e10"
	I1010 18:22:37.690628  339754 cri.go:89] found id: "1352ca41b0e7626fbf6ee43638506dfab18bd157572e9128f411ac1c5ae54538"
	I1010 18:22:37.690631  339754 cri.go:89] found id: "2aeadcb9e03cc805af5eff4f1b521299f31e4d618387d10eef543b4e95787f70"
	I1010 18:22:37.690633  339754 cri.go:89] found id: "6c6e229b2a8311cf4d60aad6c602e02c2923b5ba2309e536076e40579456e8e2"
	I1010 18:22:37.690636  339754 cri.go:89] found id: "c3f03c923ad6830325d9888fdf2ad9de25ac73298e25b5812f72951d65af2eec"
	I1010 18:22:37.690641  339754 cri.go:89] found id: "0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd"
	I1010 18:22:37.690643  339754 cri.go:89] found id: "f1349f3edaedc69a7aa332fe3f3662c37e7ed235777aff61417cc51c8e32a81e"
	I1010 18:22:37.690646  339754 cri.go:89] found id: ""
	I1010 18:22:37.690680  339754 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:22:37.702821  339754 retry.go:31] will retry after 385.985134ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:22:37Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:22:38.089521  339754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:22:38.103069  339754 pause.go:52] kubelet running: false
	I1010 18:22:38.103133  339754 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1010 18:22:38.237022  339754 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1010 18:22:38.237105  339754 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1010 18:22:38.299409  339754 cri.go:89] found id: "d3f4f58452fe6cad87bbeefc0306b06959951916beb1588e1284049f7b3f4f98"
	I1010 18:22:38.299435  339754 cri.go:89] found id: "d5dadd7d16f48731fcf9902bc7edb1c11a125db6a4169fdb24d901c1afb65224"
	I1010 18:22:38.299441  339754 cri.go:89] found id: "54882de88b25d351cee0feb4833af2c57b273bf1f3a3c88e36f676b1619686cb"
	I1010 18:22:38.299447  339754 cri.go:89] found id: "c70a052ca72d3fcf8221f750d50a9946693d80a13afebd88510aad7b927f385b"
	I1010 18:22:38.299451  339754 cri.go:89] found id: "6fc01004fca02171293288225d03c012204cdc683fe6069b66f91de42b957e10"
	I1010 18:22:38.299456  339754 cri.go:89] found id: "1352ca41b0e7626fbf6ee43638506dfab18bd157572e9128f411ac1c5ae54538"
	I1010 18:22:38.299460  339754 cri.go:89] found id: "2aeadcb9e03cc805af5eff4f1b521299f31e4d618387d10eef543b4e95787f70"
	I1010 18:22:38.299463  339754 cri.go:89] found id: "6c6e229b2a8311cf4d60aad6c602e02c2923b5ba2309e536076e40579456e8e2"
	I1010 18:22:38.299465  339754 cri.go:89] found id: "c3f03c923ad6830325d9888fdf2ad9de25ac73298e25b5812f72951d65af2eec"
	I1010 18:22:38.299481  339754 cri.go:89] found id: "0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd"
	I1010 18:22:38.299484  339754 cri.go:89] found id: "f1349f3edaedc69a7aa332fe3f3662c37e7ed235777aff61417cc51c8e32a81e"
	I1010 18:22:38.299486  339754 cri.go:89] found id: ""
	I1010 18:22:38.299523  339754 ssh_runner.go:195] Run: sudo runc list -f json
	I1010 18:22:38.313506  339754 out.go:203] 
	W1010 18:22:38.314551  339754 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:22:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:22:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1010 18:22:38.314578  339754 out.go:285] * 
	* 
	W1010 18:22:38.318891  339754 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1010 18:22:38.319952  339754 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-821769 --alsologtostderr -v=1 failed: exit status 80
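
Note: this is the same failure signature as the other Pause failures in this run (old-k8s-version, no-preload, embed-certs, newest-cni, all visible in the audit table below with empty END TIME). After minikube disables the kubelet, pause lists running containers via runc, and "sudo runc list -f json" exits 1 with "open /run/runc: no such file or directory" on the crio node, so every retry fails identically. A minimal manual reproduction over SSH, assuming the profile is still running (the runc invocation is verbatim from the stderr above; the ssh wrapper is illustrative):

	out/minikube-linux-amd64 -p default-k8s-diff-port-821769 ssh -- sudo systemctl is-active kubelet
	out/minikube-linux-amd64 -p default-k8s-diff-port-821769 ssh -- sudo runc list -f json

The second command is expected to exit 1 with the same "open /run/runc" error, which points at the runtime state directory on the node rather than at the cluster itself.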
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-821769
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-821769:

-- stdout --
	[
	    {
	        "Id": "92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166",
	        "Created": "2025-10-10T18:20:31.085915858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 326126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:21:36.695322482Z",
	            "FinishedAt": "2025-10-10T18:21:33.527040287Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166/hostname",
	        "HostsPath": "/var/lib/docker/containers/92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166/hosts",
	        "LogPath": "/var/lib/docker/containers/92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166/92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166-json.log",
	        "Name": "/default-k8s-diff-port-821769",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-821769:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-821769",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166",
	                "LowerDir": "/var/lib/docker/overlay2/66bee21f5730501e8e73927b89befd253dad8df05381d41144b0a046ca5a7701-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/66bee21f5730501e8e73927b89befd253dad8df05381d41144b0a046ca5a7701/merged",
	                "UpperDir": "/var/lib/docker/overlay2/66bee21f5730501e8e73927b89befd253dad8df05381d41144b0a046ca5a7701/diff",
	                "WorkDir": "/var/lib/docker/overlay2/66bee21f5730501e8e73927b89befd253dad8df05381d41144b0a046ca5a7701/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-821769",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-821769/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-821769",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-821769",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-821769",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3fded747e14a091ca7b858217b8414eaed4e157cb33c014d3875c0347f3c69f4",
	            "SandboxKey": "/var/run/docker/netns/3fded747e14a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-821769": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:f2:bd:a3:5d:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "24e5e8e22680fb22d88f869caaf5ecac6707c168b04786cc68232728a1674899",
	                    "EndpointID": "8ddb1e07101856a2b97cabe4d9871c8d8d7f8ee5cef61642d45268322c57364a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-821769",
	                        "92545ee0c998"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
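
Note: the NetworkSettings.Ports map above is what minikube reads to locate the forwarded SSH port (22/tcp -> 127.0.0.1:33128, used by sshutil in the pause stderr). The same lookup can be done directly with the inspect template that appears in that log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-821769
	# prints 33128 for the container state captured above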
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-821769 -n default-k8s-diff-port-821769
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-821769 -n default-k8s-diff-port-821769: exit status 2 (299.05282ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-821769 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-821769 logs -n 25: (1.027512674s)
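
Note: the helper only captures the last 25 log lines below; per the advice box in the pause output above, a complete log bundle for a bug report can be exported with:

	out/minikube-linux-amd64 -p default-k8s-diff-port-821769 logs --file=logs.txt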
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-141193 image list --format=json                                                                                                                                                                                               │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p old-k8s-version-141193 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-821769 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:22 UTC │
	│ image   │ no-preload-556024 image list --format=json                                                                                                                                                                                                    │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p no-preload-556024 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ embed-certs-472518 image list --format=json                                                                                                                                                                                                   │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p embed-certs-472518 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ delete  │ -p no-preload-556024                                                                                                                                                                                                                          │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p no-preload-556024                                                                                                                                                                                                                          │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p embed-certs-472518                                                                                                                                                                                                                         │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p embed-certs-472518                                                                                                                                                                                                                         │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-121129 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │                     │
	│ stop    │ -p newest-cni-121129 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-121129 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ start   │ -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ image   │ newest-cni-121129 image list --format=json                                                                                                                                                                                                    │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ pause   │ -p newest-cni-121129 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │                     │
	│ delete  │ -p newest-cni-121129                                                                                                                                                                                                                          │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ delete  │ -p newest-cni-121129                                                                                                                                                                                                                          │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ image   │ default-k8s-diff-port-821769 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ pause   │ -p default-k8s-diff-port-821769 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:22:05
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:22:05.290569  335513 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:22:05.290861  335513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:22:05.290870  335513 out.go:374] Setting ErrFile to fd 2...
	I1010 18:22:05.290877  335513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:22:05.291147  335513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:22:05.291697  335513 out.go:368] Setting JSON to false
	I1010 18:22:05.292906  335513 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3865,"bootTime":1760116660,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:22:05.293008  335513 start.go:141] virtualization: kvm guest
	I1010 18:22:05.294961  335513 out.go:179] * [newest-cni-121129] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:22:05.296259  335513 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:22:05.296288  335513 notify.go:220] Checking for updates...
	I1010 18:22:05.298639  335513 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:22:05.299676  335513 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:22:05.300690  335513 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:22:05.301797  335513 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:22:05.302929  335513 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:22:05.307755  335513 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:22:05.308318  335513 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:22:05.332954  335513 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:22:05.333071  335513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:22:05.393251  335513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-10 18:22:05.383186457 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:22:05.393422  335513 docker.go:318] overlay module found
	I1010 18:22:05.395166  335513 out.go:179] * Using the docker driver based on existing profile
	I1010 18:22:05.396306  335513 start.go:305] selected driver: docker
	I1010 18:22:05.396321  335513 start.go:925] validating driver "docker" against &{Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:22:05.396438  335513 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:22:05.397122  335513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:22:05.458840  335513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-10 18:22:05.448230468 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
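The single-line struct dump above is the raw result of the `docker system info --format "{{json .}}"` probe logged just before it; minikube parses this record to validate the daemon. The same probe can be reproduced by hand against any local daemon (jq is assumed here only for readability):

	docker system info --format "{{json .}}"
	docker system info --format "{{json .}}" | jq '.ServerVersion, .CgroupDriver, .OSType'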
	I1010 18:22:05.459176  335513 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1010 18:22:05.459216  335513 cni.go:84] Creating CNI manager for ""
	I1010 18:22:05.459260  335513 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:22:05.459302  335513 start.go:349] cluster config:
	{Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:22:05.461891  335513 out.go:179] * Starting "newest-cni-121129" primary control-plane node in "newest-cni-121129" cluster
	I1010 18:22:05.462953  335513 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:22:05.464080  335513 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:22:05.465182  335513 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:22:05.465219  335513 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:22:05.465239  335513 cache.go:58] Caching tarball of preloaded images
	I1010 18:22:05.465271  335513 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:22:05.465353  335513 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:22:05.465368  335513 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1010 18:22:05.465464  335513 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/config.json ...
	I1010 18:22:05.486563  335513 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:22:05.486586  335513 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:22:05.486605  335513 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:22:05.486632  335513 start.go:360] acquireMachinesLock for newest-cni-121129: {Name:mkd067d67013b78a79cc31e2d50fcfd69790fc6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:22:05.486702  335513 start.go:364] duration metric: took 48.282µs to acquireMachinesLock for "newest-cni-121129"
	I1010 18:22:05.486725  335513 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:22:05.486733  335513 fix.go:54] fixHost starting: 
	I1010 18:22:05.486937  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:05.505160  335513 fix.go:112] recreateIfNeeded on newest-cni-121129: state=Stopped err=<nil>
	W1010 18:22:05.505189  335513 fix.go:138] unexpected machine state, will restart: <nil>
	W1010 18:22:02.659629  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:04.660067  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	I1010 18:22:05.506971  335513 out.go:252] * Restarting existing docker container for "newest-cni-121129" ...
	I1010 18:22:05.507082  335513 cli_runner.go:164] Run: docker start newest-cni-121129
	I1010 18:22:05.744340  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:05.763128  335513 kic.go:430] container "newest-cni-121129" state is running.
	I1010 18:22:05.763484  335513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:22:05.782418  335513 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/config.json ...
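The restart path is plain Docker CLI work: start the stopped node container, then poll its state and addresses with Go-template inspects. Reproduced by hand for this profile (container name taken from the run above):

	docker start newest-cni-121129
	docker container inspect newest-cni-121129 --format '{{.State.Status}}'
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' newest-cni-121129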
	I1010 18:22:05.782704  335513 machine.go:93] provisionDockerMachine start ...
	I1010 18:22:05.782787  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:05.801168  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:05.801379  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:05.801392  335513 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:22:05.802022  335513 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53092->127.0.0.1:33133: read: connection reset by peer
	I1010 18:22:08.938319  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:22:08.938347  335513 ubuntu.go:182] provisioning hostname "newest-cni-121129"
	I1010 18:22:08.938432  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:08.956809  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:08.957009  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:08.957024  335513 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-121129 && echo "newest-cni-121129" | sudo tee /etc/hostname
	I1010 18:22:09.102637  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:22:09.102708  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:09.121495  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:09.121708  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:09.121725  335513 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-121129' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-121129/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-121129' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:22:09.255802  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1010 18:22:09.255838  335513 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:22:09.255877  335513 ubuntu.go:190] setting up certificates
	I1010 18:22:09.255893  335513 provision.go:84] configureAuth start
	I1010 18:22:09.255959  335513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:22:09.273223  335513 provision.go:143] copyHostCerts
	I1010 18:22:09.273280  335513 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:22:09.273293  335513 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:22:09.273359  335513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:22:09.273459  335513 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:22:09.273468  335513 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:22:09.273494  335513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:22:09.273561  335513 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:22:09.273568  335513 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:22:09.273591  335513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:22:09.273652  335513 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.newest-cni-121129 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-121129]
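minikube generates this server certificate in-process with Go's crypto libraries rather than shelling out; the openssl rendition below is only a hypothetical illustration of the equivalent operation, reusing the org and SAN list printed in the provision line above:

	# hypothetical openssl equivalent of the in-process cert generation
	openssl req -new -key server-key.pem -subj "/O=jenkins.newest-cni-121129" |
	  openssl x509 -req -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:newest-cni-121129') \
	    -out server.pem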
	I1010 18:22:09.612120  335513 provision.go:177] copyRemoteCerts
	I1010 18:22:09.612187  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:22:09.612221  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:09.629812  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
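The node's sshd is only published on a loopback port (33133 in this run), so every remote step resolves the mapping with a Go-template inspect and dials 127.0.0.1. A hand-run equivalent, using the key path and docker user shown in the sshutil line above:

	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-121129)
	ssh -i /home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa \
	  -p "$PORT" docker@127.0.0.1 hostname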
	I1010 18:22:09.726962  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:22:09.746555  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 18:22:09.766845  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 18:22:09.786986  335513 provision.go:87] duration metric: took 531.066176ms to configureAuth
	I1010 18:22:09.787015  335513 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:22:09.787209  335513 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:22:09.787337  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:09.805200  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:09.805389  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:09.805406  335513 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:22:10.098222  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:22:10.098249  335513 machine.go:96] duration metric: took 4.31552528s to provisionDockerMachine
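The final provisioning step writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube inside the node and restarts crio over SSH. A quick out-of-band check that the drop-in landed and the daemon came back (docker exec is just one of several equivalent routes in):

	docker exec newest-cni-121129 cat /etc/sysconfig/crio.minikube
	docker exec newest-cni-121129 systemctl is-active crio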
	I1010 18:22:10.098261  335513 start.go:293] postStartSetup for "newest-cni-121129" (driver="docker")
	I1010 18:22:10.098276  335513 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:22:10.098357  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:22:10.098407  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.115908  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.213790  335513 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:22:10.217524  335513 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:22:10.217553  335513 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:22:10.217567  335513 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:22:10.217636  335513 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:22:10.217740  335513 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:22:10.217864  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:22:10.226684  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:22:10.248076  335513 start.go:296] duration metric: took 149.799111ms for postStartSetup
	I1010 18:22:10.248178  335513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:22:10.248226  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.266300  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.360213  335513 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:22:10.364797  335513 fix.go:56] duration metric: took 4.878059137s for fixHost
	I1010 18:22:10.364821  335513 start.go:83] releasing machines lock for "newest-cni-121129", held for 4.878105914s
	I1010 18:22:10.364878  335513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:22:10.383110  335513 ssh_runner.go:195] Run: cat /version.json
	I1010 18:22:10.383169  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.383208  335513 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:22:10.383290  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.401694  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.402069  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.549004  335513 ssh_runner.go:195] Run: systemctl --version
	I1010 18:22:10.555440  335513 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:22:10.589903  335513 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:22:10.594487  335513 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:22:10.594552  335513 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:22:10.603402  335513 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
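The find invocation above is logged in its shell-escaped argv form; quoted for an interactive shell (and with the mv target passed as a positional parameter for safety), the same bridge/podman CNI disabling step reads:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;

Nothing matched here, hence the "no active bridge cni configs found" result.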
	I1010 18:22:10.603427  335513 start.go:495] detecting cgroup driver to use...
	I1010 18:22:10.603462  335513 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:22:10.603516  335513 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:22:10.617988  335513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:22:10.630757  335513 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:22:10.630811  335513 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:22:10.645086  335513 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:22:10.659116  335513 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:22:10.739783  335513 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:22:10.821838  335513 docker.go:234] disabling docker service ...
	I1010 18:22:10.821898  335513 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:22:10.836530  335513 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:22:10.849438  335513 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:22:10.926810  335513 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:22:11.009431  335513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:22:11.022257  335513 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:22:11.037720  335513 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:22:11.037792  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.047811  335513 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:22:11.047875  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.057692  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.067884  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.077525  335513 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:22:11.087002  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.096828  335513 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.106020  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.115595  335513 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:22:11.123688  335513 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:22:11.131885  335513 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:22:11.209940  335513 ssh_runner.go:195] Run: sudo systemctl restart crio
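Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the following fragment (reconstructed from the commands, not captured from the node):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The daemon-reload and crio restart make these take effect, which is why the crictl probes that follow succeed immediately.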
	I1010 18:22:11.350816  335513 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:22:11.350877  335513 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:22:11.355096  335513 start.go:563] Will wait 60s for crictl version
	I1010 18:22:11.355145  335513 ssh_runner.go:195] Run: which crictl
	I1010 18:22:11.358770  335513 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:22:11.384561  335513 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
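This version record is crictl talking to crio over the endpoint written to /etc/crictl.yaml a few lines earlier. Run by hand, the socket can also be passed explicitly instead of relying on the config file:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version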
	I1010 18:22:11.384639  335513 ssh_runner.go:195] Run: crio --version
	I1010 18:22:11.411320  335513 ssh_runner.go:195] Run: crio --version
	I1010 18:22:11.440045  335513 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1010 18:22:06.661425  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:09.158000  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:11.160422  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	I1010 18:22:11.441103  335513 cli_runner.go:164] Run: docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:22:11.458538  335513 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1010 18:22:11.462704  335513 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
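The one-liner filters any stale host.minikube.internal entry out of /etc/hosts, appends the gateway mapping, and copies the temporary file back into place. Verifying the result on the node:

	grep host.minikube.internal /etc/hosts
	# expected: 192.168.85.1	host.minikube.internal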
	I1010 18:22:11.475134  335513 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1010 18:22:11.476017  335513 kubeadm.go:883] updating cluster {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:22:11.476150  335513 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:22:11.476202  335513 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:22:11.507304  335513 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:22:11.507323  335513 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:22:11.507363  335513 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:22:11.533243  335513 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:22:11.533265  335513 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:22:11.533272  335513 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1010 18:22:11.533353  335513 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-121129 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
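The empty ExecStart= line is the standard systemd idiom for clearing the base unit's command before overriding it; the rendered unit and drop-in land on the node as the 352- and 367-byte scp transfers below. To confirm which command line systemd will actually use once drop-ins are merged:

	systemctl cat kubelet | grep -A 2 '^ExecStart='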
	I1010 18:22:11.533416  335513 ssh_runner.go:195] Run: crio config
	I1010 18:22:11.578761  335513 cni.go:84] Creating CNI manager for ""
	I1010 18:22:11.578789  335513 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:22:11.578804  335513 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1010 18:22:11.578824  335513 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-121129 NodeName:newest-cni-121129 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:22:11.578929  335513 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-121129"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
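The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets copied to /var/tmp/minikube/kubeadm.yaml.new below. As a hypothetical out-of-band sanity check, recent kubeadm releases can lint such a file directly:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new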
	
	I1010 18:22:11.578984  335513 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:22:11.587839  335513 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:22:11.587894  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:22:11.596414  335513 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1010 18:22:11.610238  335513 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:22:11.623960  335513 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1010 18:22:11.637763  335513 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:22:11.641378  335513 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:22:11.652228  335513 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:22:11.733285  335513 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:22:11.757177  335513 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129 for IP: 192.168.85.2
	I1010 18:22:11.757199  335513 certs.go:195] generating shared ca certs ...
	I1010 18:22:11.757219  335513 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:22:11.757370  335513 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:22:11.757429  335513 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:22:11.757441  335513 certs.go:257] generating profile certs ...
	I1010 18:22:11.757572  335513 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key
	I1010 18:22:11.757653  335513 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7
	I1010 18:22:11.757703  335513 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key
	I1010 18:22:11.757835  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:22:11.757872  335513 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:22:11.757885  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:22:11.757915  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:22:11.757954  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:22:11.757981  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:22:11.758033  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:22:11.758775  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:22:11.778857  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:22:11.801208  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:22:11.824760  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:22:11.851378  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:22:11.870850  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:22:11.889951  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:22:11.908879  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:22:11.928343  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:22:11.948375  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:22:11.969276  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:22:11.988998  335513 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:22:12.003247  335513 ssh_runner.go:195] Run: openssl version
	I1010 18:22:12.009554  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:22:12.018800  335513 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:22:12.022724  335513 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:22:12.022777  335513 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:22:12.057604  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:22:12.067287  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:22:12.076762  335513 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:22:12.080550  335513 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:22:12.080594  335513 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:22:12.114518  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:22:12.123583  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:22:12.132960  335513 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:22:12.137033  335513 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:22:12.137103  335513 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:22:12.172587  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
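The .0 symlink names in this sequence are OpenSSL subject hashes: `openssl x509 -hash` prints the 8-hex-digit value that the library expects as a link name under /etc/ssl/certs, and each pair of Run lines computes the hash and then creates the matching link. For the CA in this run:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# b5213941   (hence the /etc/ssl/certs/b5213941.0 symlink above)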
	I1010 18:22:12.181976  335513 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:22:12.185849  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:22:12.220072  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:22:12.255822  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:22:12.300141  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:22:12.343441  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:22:12.393734  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
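Each -checkend 86400 probe asks whether a certificate remains valid for at least another 86400 seconds (24 h): exit 0 means yes, exit 1 means it expires inside the window. A clean pass over every control-plane cert is what lets minikube reuse them instead of regenerating. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"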
	I1010 18:22:12.454003  335513 kubeadm.go:400] StartCluster: {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:22:12.454110  335513 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:22:12.454196  335513 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:22:12.487305  335513 cri.go:89] found id: "7f03778cf9929180d97b99c5a7dabc1b07cab95c28d238404c4b3cdda1350b21"
	I1010 18:22:12.487332  335513 cri.go:89] found id: "bf112ce4d768b53d1e90f30761c3ce870d54e55a9c7241326c2c1e377046fb0b"
	I1010 18:22:12.487338  335513 cri.go:89] found id: "ab61bce748bfcc69bd3fc766155054b877fa6b8c7695ee04c693a7820d3e6b33"
	I1010 18:22:12.487343  335513 cri.go:89] found id: "ef0f16a1ff912c99555175b679ca7c2499386f3f7c4b4c9a7270a180e8c15937"
	I1010 18:22:12.487347  335513 cri.go:89] found id: ""
	I1010 18:22:12.487394  335513 ssh_runner.go:195] Run: sudo runc list -f json
	W1010 18:22:12.500489  335513 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:22:12Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:22:12.500556  335513 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:22:12.509425  335513 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1010 18:22:12.509447  335513 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1010 18:22:12.509493  335513 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 18:22:12.518026  335513 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:22:12.518736  335513 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-121129" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:22:12.519045  335513 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-5815/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-121129" cluster setting kubeconfig missing "newest-cni-121129" context setting]
	I1010 18:22:12.519593  335513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:22:12.520854  335513 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 18:22:12.530013  335513 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1010 18:22:12.530074  335513 kubeadm.go:601] duration metric: took 20.594831ms to restartPrimaryControlPlane
	I1010 18:22:12.530092  335513 kubeadm.go:402] duration metric: took 76.095724ms to StartCluster
	I1010 18:22:12.530115  335513 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:22:12.530186  335513 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:22:12.530994  335513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:22:12.531256  335513 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:22:12.531320  335513 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:22:12.531440  335513 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-121129"
	I1010 18:22:12.531451  335513 addons.go:69] Setting dashboard=true in profile "newest-cni-121129"
	I1010 18:22:12.531464  335513 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-121129"
	I1010 18:22:12.531470  335513 addons.go:238] Setting addon dashboard=true in "newest-cni-121129"
	W1010 18:22:12.531473  335513 addons.go:247] addon storage-provisioner should already be in state true
	W1010 18:22:12.531478  335513 addons.go:247] addon dashboard should already be in state true
	I1010 18:22:12.531482  335513 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:22:12.531493  335513 addons.go:69] Setting default-storageclass=true in profile "newest-cni-121129"
	I1010 18:22:12.531516  335513 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:22:12.531531  335513 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-121129"
	I1010 18:22:12.531504  335513 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:22:12.531869  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.532071  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.532071  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.535048  335513 out.go:179] * Verifying Kubernetes components...
	I1010 18:22:12.536132  335513 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:22:12.558523  335513 addons.go:238] Setting addon default-storageclass=true in "newest-cni-121129"
	W1010 18:22:12.558549  335513 addons.go:247] addon default-storageclass should already be in state true
	I1010 18:22:12.558578  335513 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:22:12.558631  335513 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1010 18:22:12.559047  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.559747  335513 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:22:12.560780  335513 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1010 18:22:12.560840  335513 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:22:12.560860  335513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:22:12.560910  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:12.565594  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1010 18:22:12.565614  335513 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1010 18:22:12.565676  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:12.591733  335513 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:22:12.591757  335513 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:22:12.591901  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:12.596614  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:12.597384  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:12.615727  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:12.677916  335513 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:22:12.692091  335513 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:22:12.692167  335513 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:22:12.705996  335513 api_server.go:72] duration metric: took 174.708821ms to wait for apiserver process to appear ...
	I1010 18:22:12.706031  335513 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:22:12.706074  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:12.762093  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1010 18:22:12.762118  335513 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1010 18:22:12.763137  335513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:22:12.775071  335513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:22:12.780905  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1010 18:22:12.780927  335513 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1010 18:22:12.802455  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1010 18:22:12.802487  335513 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1010 18:22:12.823607  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1010 18:22:12.823636  335513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1010 18:22:12.839919  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1010 18:22:12.839944  335513 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1010 18:22:12.856483  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1010 18:22:12.856511  335513 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1010 18:22:12.873148  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1010 18:22:12.873175  335513 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1010 18:22:12.888146  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1010 18:22:12.888174  335513 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1010 18:22:12.903848  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:22:12.903872  335513 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1010 18:22:12.922065  335513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:22:13.877182  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 18:22:13.877224  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 18:22:13.877242  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:13.912048  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 18:22:13.912094  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 18:22:14.206894  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:14.212024  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:22:14.212069  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 18:22:14.404024  335513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.640853795s)
	I1010 18:22:14.404112  335513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.629010505s)
	I1010 18:22:14.404217  335513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.482117938s)
	I1010 18:22:14.406078  335513 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-121129 addons enable metrics-server
	
	I1010 18:22:14.415455  335513 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1010 18:22:14.416687  335513 addons.go:514] duration metric: took 1.885368042s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1010 18:22:14.706642  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:14.710574  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:22:14.710598  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 18:22:15.206122  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:15.211233  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1010 18:22:15.212217  335513 api_server.go:141] control plane version: v1.34.1
	I1010 18:22:15.212245  335513 api_server.go:131] duration metric: took 2.506207886s to wait for apiserver health ...
	I1010 18:22:15.212254  335513 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:22:15.216033  335513 system_pods.go:59] 8 kube-system pods found
	I1010 18:22:15.216081  335513 system_pods.go:61] "coredns-66bc5c9577-bbxwj" [54b0d9c6-555f-476b-90d2-aca531478020] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1010 18:22:15.216094  335513 system_pods.go:61] "etcd-newest-cni-121129" [24b69503-efe0-4418-b656-58b90f7d7420] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:22:15.216110  335513 system_pods.go:61] "kindnet-9ml5n" [22d3f2b7-d65b-4c8e-a02f-58ead02d9794] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1010 18:22:15.216124  335513 system_pods.go:61] "kube-apiserver-newest-cni-121129" [c429c3d5-c663-453e-9d48-8eacc534ebf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:22:15.216133  335513 system_pods.go:61] "kube-controller-manager-newest-cni-121129" [5e35e588-6a2a-414e-aea9-4d1d8b7897dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:22:15.216142  335513 system_pods.go:61] "kube-proxy-sw4cj" [82e9ec15-44c0-4bfd-8b16-3862f7bb01a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1010 18:22:15.216147  335513 system_pods.go:61] "kube-scheduler-newest-cni-121129" [55bc4998-af60-4c82-a3cc-18ccc57ede90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:22:15.216160  335513 system_pods.go:61] "storage-provisioner" [c4cb75b4-5b40-4243-b3df-fd256cb036f9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1010 18:22:15.216168  335513 system_pods.go:74] duration metric: took 3.909666ms to wait for pod list to return data ...
	I1010 18:22:15.216178  335513 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:22:15.218489  335513 default_sa.go:45] found service account: "default"
	I1010 18:22:15.218507  335513 default_sa.go:55] duration metric: took 2.324261ms for default service account to be created ...
	I1010 18:22:15.218517  335513 kubeadm.go:586] duration metric: took 2.68723566s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1010 18:22:15.218530  335513 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:22:15.220763  335513 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:22:15.220790  335513 node_conditions.go:123] node cpu capacity is 8
	I1010 18:22:15.220807  335513 node_conditions.go:105] duration metric: took 2.269966ms to run NodePressure ...
	I1010 18:22:15.220826  335513 start.go:241] waiting for startup goroutines ...
	I1010 18:22:15.220838  335513 start.go:246] waiting for cluster config update ...
	I1010 18:22:15.220851  335513 start.go:255] writing updated cluster config ...
	I1010 18:22:15.221177  335513 ssh_runner.go:195] Run: rm -f paused
	I1010 18:22:15.271095  335513 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:22:15.273221  335513 out.go:179] * Done! kubectl is now configured to use "newest-cni-121129" cluster and "default" namespace by default
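
	The healthz wait above is a plain HTTP poll: the endpoint returns 403 while RBAC bootstrap roles are still being created, 500 while post-start hooks report failures, and finally 200 ("ok"). A minimal, self-contained Go sketch of that polling pattern follows; it is illustrative only, not minikube's actual api_server.go, and the URL, interval, and timeout are assumptions taken from this run.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout
	// expires. 403 and 500 responses are treated as "not ready yet".
	func waitForHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			// A bare probe typically doesn't trust the cluster CA yet,
			// hence InsecureSkipVerify (assumption, fine for a local probe).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		err := waitForHealthz("https://192.168.85.2:8443/healthz", 500*time.Millisecond, 2*time.Minute)
		fmt.Println("result:", err)
	}
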
	W1010 18:22:13.657932  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:15.658631  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:17.658663  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:20.158567  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:22.657896  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	I1010 18:22:23.158303  325699 pod_ready.go:94] pod "coredns-66bc5c9577-wrz5v" is "Ready"
	I1010 18:22:23.158329  325699 pod_ready.go:86] duration metric: took 36.005628615s for pod "coredns-66bc5c9577-wrz5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.160886  325699 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.165078  325699 pod_ready.go:94] pod "etcd-default-k8s-diff-port-821769" is "Ready"
	I1010 18:22:23.165100  325699 pod_ready.go:86] duration metric: took 4.192959ms for pod "etcd-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.167146  325699 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.170964  325699 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-821769" is "Ready"
	I1010 18:22:23.170991  325699 pod_ready.go:86] duration metric: took 3.822481ms for pod "kube-apiserver-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.172939  325699 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.356997  325699 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-821769" is "Ready"
	I1010 18:22:23.357025  325699 pod_ready.go:86] duration metric: took 184.065616ms for pod "kube-controller-manager-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.556347  325699 pod_ready.go:83] waiting for pod "kube-proxy-h2mzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.956188  325699 pod_ready.go:94] pod "kube-proxy-h2mzf" is "Ready"
	I1010 18:22:23.956226  325699 pod_ready.go:86] duration metric: took 399.852289ms for pod "kube-proxy-h2mzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:24.156235  325699 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:24.556503  325699 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-821769" is "Ready"
	I1010 18:22:24.556533  325699 pod_ready.go:86] duration metric: took 400.272866ms for pod "kube-scheduler-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:24.556547  325699 pod_ready.go:40] duration metric: took 37.408032694s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:22:24.600620  325699 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:22:24.602340  325699 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-821769" cluster and "default" namespace by default
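
	The pod_ready loop above keeps polling each kube-system pod until its PodReady condition turns true (or the pod goes away). Below is a hedged client-go sketch of the same check; the pod name and namespace are taken from this run, while the kubeconfig path and intervals are assumptions, and this is not minikube's pod_ready.go.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 2s for up to 5 minutes, mirroring the ~36s wait logged
		// above for coredns-66bc5c9577-wrz5v (intervals are assumptions).
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 5*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-66bc5c9577-wrz5v", metav1.GetOptions{})
				if err != nil {
					return false, nil // not found or transient error: keep polling
				}
				return isPodReady(pod), nil
			})
		fmt.Println("wait result:", err)
	}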
	
	
	==> CRI-O <==
	Oct 10 18:21:58 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:21:58.067778718Z" level=info msg="Started container" PID=1751 containerID=094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2/dashboard-metrics-scraper id=50825bda-284a-4f80-9dd0-43fa94aa3fec name=/runtime.v1.RuntimeService/StartContainer sandboxID=701908876e25dff2aea789d1c3a91c24ca60f96ed87f721d50a492a14985f571
	Oct 10 18:21:59 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:21:59.03335925Z" level=info msg="Removing container: 7dabe06209f98be63856405b59477b58c65b4603d592ee988bdaf873d01115e8" id=e958084b-371d-4dda-840c-714c51fb5329 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:21:59 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:21:59.043385337Z" level=info msg="Removed container 7dabe06209f98be63856405b59477b58c65b4603d592ee988bdaf873d01115e8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2/dashboard-metrics-scraper" id=e958084b-371d-4dda-840c-714c51fb5329 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.955397471Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2b7c96f2-7bed-477b-8751-98de9d6984fa name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.956452829Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6f445e76-562d-4675-8e01-21ed506a70b6 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.957522449Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2/dashboard-metrics-scraper" id=e0838f07-05eb-4e7e-8812-0c2589ae2fca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.957755745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.964139047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.964832359Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.993222424Z" level=info msg="Created container 0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2/dashboard-metrics-scraper" id=e0838f07-05eb-4e7e-8812-0c2589ae2fca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.993893549Z" level=info msg="Starting container: 0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd" id=aef295e4-5e78-4232-8f29-b2fb04a361b1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.995930736Z" level=info msg="Started container" PID=1761 containerID=0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2/dashboard-metrics-scraper id=aef295e4-5e78-4232-8f29-b2fb04a361b1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=701908876e25dff2aea789d1c3a91c24ca60f96ed87f721d50a492a14985f571
	Oct 10 18:22:15 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:15.076705795Z" level=info msg="Removing container: 094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d" id=5b369dca-8257-463c-912e-314e029c1278 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:22:15 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:15.087403509Z" level=info msg="Removed container 094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2/dashboard-metrics-scraper" id=5b369dca-8257-463c-912e-314e029c1278 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.083987327Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=37ade6f9-6e79-4f27-8534-8ad715d917da name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.084919404Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=243f0b5a-90e7-4153-96d1-29e6d6c21628 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.085952603Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=70b72fbf-a86f-4e09-b0ed-979026ac95da name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.086246712Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.091486788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.091674829Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/02fb9ae758bdb3ea6d68361dfca5c556428442a4f62c197657c85c7dd8929577/merged/etc/passwd: no such file or directory"
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.091712194Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/02fb9ae758bdb3ea6d68361dfca5c556428442a4f62c197657c85c7dd8929577/merged/etc/group: no such file or directory"
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.091998282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.121332507Z" level=info msg="Created container d3f4f58452fe6cad87bbeefc0306b06959951916beb1588e1284049f7b3f4f98: kube-system/storage-provisioner/storage-provisioner" id=70b72fbf-a86f-4e09-b0ed-979026ac95da name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.121930285Z" level=info msg="Starting container: d3f4f58452fe6cad87bbeefc0306b06959951916beb1588e1284049f7b3f4f98" id=3e79961d-df02-4801-89da-74c702c9aefa name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.124043485Z" level=info msg="Started container" PID=1775 containerID=d3f4f58452fe6cad87bbeefc0306b06959951916beb1588e1284049f7b3f4f98 description=kube-system/storage-provisioner/storage-provisioner id=3e79961d-df02-4801-89da-74c702c9aefa name=/runtime.v1.RuntimeService/StartContainer sandboxID=9eaed1350c29331901b74e3530739c6df8616d773aa02858b52dd37712ea35ba
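
	The /runtime.v1.RuntimeService/CreateContainer and StartContainer entries above are CRI gRPC calls arriving from the kubelet over CRI-O's unix socket. A small Go sketch of querying the same endpoint directly follows; it assumes CRI-O's default socket path /var/run/crio/crio.sock and the k8s.io/cri-api client stubs (crictl does the equivalent from the command line).

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Unix-domain socket; no TLS on a local CRI endpoint.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s (CRI %s)\n", v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)

		// List containers: the programmatic equivalent of `crictl ps -a`.
		list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range list.Containers {
			fmt.Println(c.Metadata.Name, c.State)
		}
	}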
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	d3f4f58452fe6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   9eaed1350c293       storage-provisioner                                    kube-system
	0cc6c1dc24c03       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   701908876e25d       dashboard-metrics-scraper-6ffb444bf9-mrzb2             kubernetes-dashboard
	f1349f3edaedc       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   17eba9836b269       kubernetes-dashboard-855c9754f9-mb49v                  kubernetes-dashboard
	d5dadd7d16f48       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   6b9f2301af5b8       coredns-66bc5c9577-wrz5v                               kube-system
	a820896e513ae       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   57f322bb066b4       busybox                                                default
	54882de88b25d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   021e5cf1806d8       kube-proxy-h2mzf                                       kube-system
	c70a052ca72d3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   9eaed1350c293       storage-provisioner                                    kube-system
	6fc01004fca02       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   3d7553e01993a       kindnet-4w475                                          kube-system
	1352ca41b0e76       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   77b5553db6ce8       kube-scheduler-default-k8s-diff-port-821769            kube-system
	2aeadcb9e03cc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   b739aa6250254       kube-apiserver-default-k8s-diff-port-821769            kube-system
	6c6e229b2a831       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   6581b50a8c49e       etcd-default-k8s-diff-port-821769                      kube-system
	c3f03c923ad68       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   68d6bbeb4e21e       kube-controller-manager-default-k8s-diff-port-821769   kube-system
	
	
	==> coredns [d5dadd7d16f48731fcf9902bc7edb1c11a125db6a4169fdb24d901c1afb65224] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34713 - 48103 "HINFO IN 4567185946183888815.8088431449797919716. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033529641s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
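
	The "plugin/ready: Still waiting on: \"kubernetes\"" lines come from CoreDNS's ready plugin, which holds its readiness endpoint down until the kubernetes plugin has synced; the i/o timeouts to 10.96.0.1:443 show why that sync stalled during the restart. Once CoreDNS recovers, a query against the cluster DNS service verifies it end to end. A hedged Go sketch, run from inside the cluster network: 10.96.0.10 is the conventional kube-dns ClusterIP under the default 10.96.0.0/12 service CIDR, an assumption for this cluster.

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Resolver pinned to the cluster DNS service instead of /etc/resolv.conf.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "udp", "10.96.0.10:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
		fmt.Println(addrs, err) // expect the apiserver ClusterIP, e.g. 10.96.0.1
	}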
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-821769
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-821769
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=default-k8s-diff-port-821769
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_20_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:20:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-821769
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:22:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:22:36 +0000   Fri, 10 Oct 2025 18:20:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:22:36 +0000   Fri, 10 Oct 2025 18:20:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:22:36 +0000   Fri, 10 Oct 2025 18:20:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 18:22:36 +0000   Fri, 10 Oct 2025 18:21:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-821769
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                41d605da-1886-46ad-9ac8-df71dd2b8693
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-wrz5v                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-default-k8s-diff-port-821769                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-4w475                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-default-k8s-diff-port-821769             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-821769    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-h2mzf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-default-k8s-diff-port-821769             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-mrzb2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mb49v                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node default-k8s-diff-port-821769 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node default-k8s-diff-port-821769 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node default-k8s-diff-port-821769 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           107s               node-controller  Node default-k8s-diff-port-821769 event: Registered Node default-k8s-diff-port-821769 in Controller
	  Normal  NodeReady                95s                kubelet          Node default-k8s-diff-port-821769 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 57s)  kubelet          Node default-k8s-diff-port-821769 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 57s)  kubelet          Node default-k8s-diff-port-821769 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 57s)  kubelet          Node default-k8s-diff-port-821769 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node default-k8s-diff-port-821769 event: Registered Node default-k8s-diff-port-821769 in Controller
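
	The Conditions table above (MemoryPressure/DiskPressure/PIDPressure all False, Ready True) is the data minikube's NodePressure verification reads before declaring the node healthy. A client-go sketch of that check follows; the node name is from this run, the kubeconfig path is an assumption, and this is not minikube's node_conditions.go.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "default-k8s-diff-port-821769", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				// Pressure conditions must be False on a healthy node.
				if c.Status != corev1.ConditionFalse {
					fmt.Printf("node under pressure: %s=%s (%s)\n", c.Type, c.Status, c.Message)
				}
			case corev1.NodeReady:
				fmt.Printf("Ready=%s since %s\n", c.Status, c.LastTransitionTime)
			}
		}
	}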
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 95 0c 3e 92 2e 08 06
	[  +0.052845] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[ +11.354316] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[  +7.101927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 9b 73 27 8c 80 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[  +6.287191] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 27 2d 28 d6 46 08 06
	[  +0.000293] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[Oct10 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 8c 22 f6 6b cf 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 29 bf 13 20 f9 08 06
	[ +15.511156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e d6 74 aa 27 d0 08 06
	[  +0.008495] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	[Oct10 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 0b 54 33 52 4e 08 06
	[  +0.000597] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	
	
	==> etcd [6c6e229b2a8311cf4d60aad6c602e02c2923b5ba2309e536076e40579456e8e2] <==
	{"level":"warn","ts":"2025-10-10T18:21:44.786834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.799217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.822446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.829193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.838044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.846887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.858288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.867515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.875866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.885713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.896441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.905072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.913554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.923182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.930295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.938605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.945798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.953077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.960595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.967908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.974807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.988260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.999969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:45.007615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:45.068855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43936","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:22:39 up  1:04,  0 user,  load average: 3.47, 4.31, 2.95
	Linux default-k8s-diff-port-821769 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6fc01004fca02171293288225d03c012204cdc683fe6069b66f91de42b957e10] <==
	I1010 18:21:46.604680       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:21:46.605700       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1010 18:21:46.605896       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:21:46.605918       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:21:46.605949       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:21:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:21:46.809481       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:21:46.904761       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:21:46.905029       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 18:21:46.905641       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:21:47.306233       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:21:47.306282       1 metrics.go:72] Registering metrics
	I1010 18:21:47.306343       1 controller.go:711] "Syncing nftables rules"
	I1010 18:21:56.810136       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1010 18:21:56.810218       1 main.go:301] handling current node
	I1010 18:22:06.812897       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1010 18:22:06.812940       1 main.go:301] handling current node
	I1010 18:22:16.809636       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1010 18:22:16.809671       1 main.go:301] handling current node
	I1010 18:22:26.812698       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1010 18:22:26.812741       1 main.go:301] handling current node
	I1010 18:22:36.810413       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1010 18:22:36.810458       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2aeadcb9e03cc805af5eff4f1b521299f31e4d618387d10eef543b4e95787f70] <==
	I1010 18:21:45.535745       1 aggregator.go:171] initial CRD sync complete...
	I1010 18:21:45.535754       1 autoregister_controller.go:144] Starting autoregister controller
	I1010 18:21:45.535761       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1010 18:21:45.535768       1 cache.go:39] Caches are synced for autoregister controller
	I1010 18:21:45.535945       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1010 18:21:45.536187       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1010 18:21:45.543186       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1010 18:21:45.553191       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1010 18:21:45.556986       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1010 18:21:45.557046       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1010 18:21:45.567402       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1010 18:21:45.567434       1 policy_source.go:240] refreshing policies
	I1010 18:21:45.590536       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:21:45.864756       1 controller.go:667] quota admission added evaluator for: namespaces
	I1010 18:21:45.895568       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1010 18:21:45.917401       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:21:45.926655       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:21:45.938505       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1010 18:21:45.985896       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.186.132"}
	I1010 18:21:46.000604       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.108.181"}
	I1010 18:21:46.438800       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:21:48.986865       1 controller.go:667] quota admission added evaluator for: endpoints
	I1010 18:21:49.089910       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1010 18:21:49.436017       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c3f03c923ad6830325d9888fdf2ad9de25ac73298e25b5812f72951d65af2eec] <==
	I1010 18:21:48.862313       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1010 18:21:48.864631       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1010 18:21:48.867757       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1010 18:21:48.870658       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1010 18:21:48.872894       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1010 18:21:48.879727       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1010 18:21:48.880688       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1010 18:21:48.880696       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1010 18:21:48.880725       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1010 18:21:48.880879       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1010 18:21:48.881903       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1010 18:21:48.881977       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1010 18:21:48.881945       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1010 18:21:48.882164       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1010 18:21:48.882198       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1010 18:21:48.884398       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1010 18:21:48.884416       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1010 18:21:48.886937       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1010 18:21:48.888133       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:21:48.890288       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1010 18:21:48.892568       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1010 18:21:48.901011       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 18:21:48.901027       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1010 18:21:48.901036       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1010 18:21:48.904085       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [54882de88b25d351cee0feb4833af2c57b273bf1f3a3c88e36f676b1619686cb] <==
	I1010 18:21:46.403774       1 server_linux.go:53] "Using iptables proxy"
	I1010 18:21:46.464387       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 18:21:46.565977       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 18:21:46.566034       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1010 18:21:46.566188       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:21:46.594535       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:21:46.594587       1 server_linux.go:132] "Using iptables Proxier"
	I1010 18:21:46.600801       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:21:46.601201       1 server.go:527] "Version info" version="v1.34.1"
	I1010 18:21:46.601311       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:21:46.603361       1 config.go:309] "Starting node config controller"
	I1010 18:21:46.603380       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 18:21:46.603389       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 18:21:46.603535       1 config.go:200] "Starting service config controller"
	I1010 18:21:46.603547       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 18:21:46.603569       1 config.go:106] "Starting endpoint slice config controller"
	I1010 18:21:46.603574       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 18:21:46.603588       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 18:21:46.603593       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 18:21:46.704252       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 18:21:46.704269       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1010 18:21:46.704298       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1352ca41b0e7626fbf6ee43638506dfab18bd157572e9128f411ac1c5ae54538] <==
	I1010 18:21:44.236987       1 serving.go:386] Generated self-signed cert in-memory
	W1010 18:21:45.470219       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1010 18:21:45.470322       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1010 18:21:45.470338       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1010 18:21:45.470348       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1010 18:21:45.514646       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1010 18:21:45.516830       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:21:45.520269       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:21:45.520307       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:21:45.521274       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1010 18:21:45.521367       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1010 18:21:45.620801       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 10 18:21:49 default-k8s-diff-port-821769 kubelet[728]: I1010 18:21:49.407978     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hkc9\" (UniqueName: \"kubernetes.io/projected/a8003814-45ef-4392-9b70-b82abb06ac1f-kube-api-access-7hkc9\") pod \"dashboard-metrics-scraper-6ffb444bf9-mrzb2\" (UID: \"a8003814-45ef-4392-9b70-b82abb06ac1f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2"
	Oct 10 18:21:49 default-k8s-diff-port-821769 kubelet[728]: I1010 18:21:49.408003     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/25ca2305-7568-48a1-bd71-8dbb16bb832b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-mb49v\" (UID: \"25ca2305-7568-48a1-bd71-8dbb16bb832b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mb49v"
	Oct 10 18:21:52 default-k8s-diff-port-821769 kubelet[728]: I1010 18:21:52.754665     728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 10 18:21:55 default-k8s-diff-port-821769 kubelet[728]: I1010 18:21:55.037081     728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mb49v" podStartSLOduration=1.443374894 podStartE2EDuration="6.037041407s" podCreationTimestamp="2025-10-10 18:21:49 +0000 UTC" firstStartedPulling="2025-10-10 18:21:49.655416613 +0000 UTC m=+6.794791035" lastFinishedPulling="2025-10-10 18:21:54.249083141 +0000 UTC m=+11.388457548" observedRunningTime="2025-10-10 18:21:55.036381285 +0000 UTC m=+12.175755719" watchObservedRunningTime="2025-10-10 18:21:55.037041407 +0000 UTC m=+12.176415834"
	Oct 10 18:21:58 default-k8s-diff-port-821769 kubelet[728]: I1010 18:21:58.025283     728 scope.go:117] "RemoveContainer" containerID="7dabe06209f98be63856405b59477b58c65b4603d592ee988bdaf873d01115e8"
	Oct 10 18:21:59 default-k8s-diff-port-821769 kubelet[728]: I1010 18:21:59.031771     728 scope.go:117] "RemoveContainer" containerID="7dabe06209f98be63856405b59477b58c65b4603d592ee988bdaf873d01115e8"
	Oct 10 18:21:59 default-k8s-diff-port-821769 kubelet[728]: I1010 18:21:59.031981     728 scope.go:117] "RemoveContainer" containerID="094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d"
	Oct 10 18:21:59 default-k8s-diff-port-821769 kubelet[728]: E1010 18:21:59.032212     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrzb2_kubernetes-dashboard(a8003814-45ef-4392-9b70-b82abb06ac1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2" podUID="a8003814-45ef-4392-9b70-b82abb06ac1f"
	Oct 10 18:22:00 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:00.036493     728 scope.go:117] "RemoveContainer" containerID="094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d"
	Oct 10 18:22:00 default-k8s-diff-port-821769 kubelet[728]: E1010 18:22:00.036679     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrzb2_kubernetes-dashboard(a8003814-45ef-4392-9b70-b82abb06ac1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2" podUID="a8003814-45ef-4392-9b70-b82abb06ac1f"
	Oct 10 18:22:01 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:01.040079     728 scope.go:117] "RemoveContainer" containerID="094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d"
	Oct 10 18:22:01 default-k8s-diff-port-821769 kubelet[728]: E1010 18:22:01.040811     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrzb2_kubernetes-dashboard(a8003814-45ef-4392-9b70-b82abb06ac1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2" podUID="a8003814-45ef-4392-9b70-b82abb06ac1f"
	Oct 10 18:22:14 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:14.954820     728 scope.go:117] "RemoveContainer" containerID="094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d"
	Oct 10 18:22:15 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:15.075325     728 scope.go:117] "RemoveContainer" containerID="094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d"
	Oct 10 18:22:15 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:15.075625     728 scope.go:117] "RemoveContainer" containerID="0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd"
	Oct 10 18:22:15 default-k8s-diff-port-821769 kubelet[728]: E1010 18:22:15.075830     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrzb2_kubernetes-dashboard(a8003814-45ef-4392-9b70-b82abb06ac1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2" podUID="a8003814-45ef-4392-9b70-b82abb06ac1f"
	Oct 10 18:22:17 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:17.083615     728 scope.go:117] "RemoveContainer" containerID="c70a052ca72d3fcf8221f750d50a9946693d80a13afebd88510aad7b927f385b"
	Oct 10 18:22:20 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:20.674368     728 scope.go:117] "RemoveContainer" containerID="0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd"
	Oct 10 18:22:20 default-k8s-diff-port-821769 kubelet[728]: E1010 18:22:20.674533     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrzb2_kubernetes-dashboard(a8003814-45ef-4392-9b70-b82abb06ac1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2" podUID="a8003814-45ef-4392-9b70-b82abb06ac1f"
	Oct 10 18:22:34 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:34.956330     728 scope.go:117] "RemoveContainer" containerID="0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd"
	Oct 10 18:22:34 default-k8s-diff-port-821769 kubelet[728]: E1010 18:22:34.956531     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrzb2_kubernetes-dashboard(a8003814-45ef-4392-9b70-b82abb06ac1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2" podUID="a8003814-45ef-4392-9b70-b82abb06ac1f"
	Oct 10 18:22:36 default-k8s-diff-port-821769 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 10 18:22:36 default-k8s-diff-port-821769 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 10 18:22:36 default-k8s-diff-port-821769 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 10 18:22:36 default-k8s-diff-port-821769 systemd[1]: kubelet.service: Consumed 1.675s CPU time.
	
	
	==> kubernetes-dashboard [f1349f3edaedc69a7aa332fe3f3662c37e7ed235777aff61417cc51c8e32a81e] <==
	2025/10/10 18:21:54 Using namespace: kubernetes-dashboard
	2025/10/10 18:21:54 Using in-cluster config to connect to apiserver
	2025/10/10 18:21:54 Using secret token for csrf signing
	2025/10/10 18:21:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/10 18:21:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/10 18:21:54 Successful initial request to the apiserver, version: v1.34.1
	2025/10/10 18:21:54 Generating JWE encryption key
	2025/10/10 18:21:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/10 18:21:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/10 18:21:55 Initializing JWE encryption key from synchronized object
	2025/10/10 18:21:55 Creating in-cluster Sidecar client
	2025/10/10 18:21:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/10 18:21:55 Serving insecurely on HTTP port: 9090
	2025/10/10 18:22:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/10 18:21:54 Starting overwatch
	
	
	==> storage-provisioner [c70a052ca72d3fcf8221f750d50a9946693d80a13afebd88510aad7b927f385b] <==
	I1010 18:21:46.361129       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1010 18:22:16.365509       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d3f4f58452fe6cad87bbeefc0306b06959951916beb1588e1284049f7b3f4f98] <==
	I1010 18:22:17.137343       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 18:22:17.145922       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 18:22:17.145964       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1010 18:22:17.148285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:20.603581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:24.864738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:28.462716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:31.516730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:34.538971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:34.545099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:22:34.545280       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 18:22:34.545345       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"acf2fe9b-472b-4115-89d9-0092fd7e1fc6", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-821769_1aa1f45c-d1f7-4ddb-b271-91df6940e918 became leader
	I1010 18:22:34.545448       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-821769_1aa1f45c-d1f7-4ddb-b271-91df6940e918!
	W1010 18:22:34.547267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:34.550730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:22:34.646267       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-821769_1aa1f45c-d1f7-4ddb-b271-91df6940e918!
	W1010 18:22:36.553962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:36.559342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:38.562453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:38.567153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
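
The kubelet entries above show the CrashLoopBackOff delay doubling for the dashboard-metrics-scraper container ("back-off 10s restarting failed container", then "back-off 20s" on the next failure). As a minimal sketch, and assuming kubelet's commonly cited defaults of a 10s initial delay, a growth factor of 2, and a 5m cap (none of which are confirmed by this run's configuration), the schedule can be reproduced in Go:

	// Hedged sketch of the restart back-off pattern visible in the kubelet log.
	// The 10s base, x2 growth, and 5m cap are assumed defaults, not values
	// read from this cluster.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		delay, maxDelay := 10*time.Second, 5*time.Minute
		for i := 1; i <= 7; i++ {
			fmt.Printf("restart %d: back-off %v\n", i, delay) // 10s, 20s, 40s, ...
			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
		}
	}

Under those assumptions the delay saturates at 5m0s by the sixth restart, which is consistent with the 10s and 20s values logged above.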
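
The storage-provisioner logs above also trace a standard client-go leader election: the restarted instance announces "attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...", wins it roughly 17 seconds later, and only then starts the provisioner controller; the repeated deprecation warnings come from its use of a v1 Endpoints object as the lock. A minimal sketch of the same handshake, using the newer Leases lock instead of Endpoints; the identity string and timings here are illustrative, not taken from the provisioner:

	// Hedged sketch of client-go leader election as performed by the
	// storage-provisioner above (which still locks on Endpoints; this
	// sketch uses the Leases lock to avoid the deprecation warnings).
	package main

	import (
		"context"
		"log"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: "sketch-instance"}) // illustrative identity
		if err != nil {
			log.Fatal(err)
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // illustrative timings
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease; starting provisioner controller") },
				OnStoppedLeading: func() { log.Println("lost lease; stopping") },
			},
		})
	}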
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-821769 -n default-k8s-diff-port-821769
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-821769 -n default-k8s-diff-port-821769: exit status 2 (301.393205ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
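These status probes hand minikube a Go template (--format={{.APIServer}} here, --format={{.Host}} further down), which is rendered against the profile's status value, so a single component's state comes back on stdout while the exit code carries the overall health; that is why "Running" prints alongside exit status 2. A minimal sketch of that template mechanism; the Status type below is an assumed stand-in for minikube's real struct:

	// Hedged sketch of rendering a --format style Go template against a
	// status value; the field names mirror the templates used in this
	// report but are assumptions about the real minikube type.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = t.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Running"})
	}
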
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-821769 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-821769
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-821769:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166",
	        "Created": "2025-10-10T18:20:31.085915858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 326126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-10T18:21:36.695322482Z",
	            "FinishedAt": "2025-10-10T18:21:33.527040287Z"
	        },
	        "Image": "sha256:84da1fc78d37190122f56c520913b0bfc454516bc5fdbdc209e2a5258afce8c3",
	        "ResolvConfPath": "/var/lib/docker/containers/92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166/hostname",
	        "HostsPath": "/var/lib/docker/containers/92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166/hosts",
	        "LogPath": "/var/lib/docker/containers/92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166/92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166-json.log",
	        "Name": "/default-k8s-diff-port-821769",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-821769:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-821769",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "92545ee0c99825b76f4f4b9fc8a4b4ba2aa46e2125731312ddee69c03ebd0166",
	                "LowerDir": "/var/lib/docker/overlay2/66bee21f5730501e8e73927b89befd253dad8df05381d41144b0a046ca5a7701-init/diff:/var/lib/docker/overlay2/9995a0af7efc4d83e8e62526a6cf13ffc5df3bab5cee59077c863040f7e3e58d/diff",
	                "MergedDir": "/var/lib/docker/overlay2/66bee21f5730501e8e73927b89befd253dad8df05381d41144b0a046ca5a7701/merged",
	                "UpperDir": "/var/lib/docker/overlay2/66bee21f5730501e8e73927b89befd253dad8df05381d41144b0a046ca5a7701/diff",
	                "WorkDir": "/var/lib/docker/overlay2/66bee21f5730501e8e73927b89befd253dad8df05381d41144b0a046ca5a7701/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-821769",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-821769/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-821769",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-821769",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-821769",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3fded747e14a091ca7b858217b8414eaed4e157cb33c014d3875c0347f3c69f4",
	            "SandboxKey": "/var/run/docker/netns/3fded747e14a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-821769": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:f2:bd:a3:5d:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "24e5e8e22680fb22d88f869caaf5ecac6707c168b04786cc68232728a1674899",
	                    "EndpointID": "8ddb1e07101856a2b97cabe4d9871c8d8d7f8ee5cef61642d45268322c57364a",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-821769",
	                        "92545ee0c998"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
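
The inspect output above is a JSON array whose NetworkSettings.Ports map ties each exposed container port to its 127.0.0.1 host binding (8444/tcp to 33131, 22/tcp to 33128, and so on); the harness later reads the SSH port with an inspect template of the form (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort. A small sketch that decodes the same structure from docker inspect output piped on stdin, modelling only the fields used here:

	// Hedged sketch: pipe `docker inspect default-k8s-diff-port-821769`
	// into this program to print the host binding for 8444/tcp. Only the
	// fields shown in the inspect output above are modelled.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type container struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp, HostPort string
			}
		}
	}

	func main() {
		var out []container // docker inspect emits a JSON array of containers
		if err := json.NewDecoder(os.Stdin).Decode(&out); err != nil || len(out) == 0 {
			fmt.Fprintln(os.Stderr, "decode failed:", err)
			os.Exit(1)
		}
		for _, b := range out[0].NetworkSettings.Ports["8444/tcp"] {
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:33131 for this run
		}
	}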
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-821769 -n default-k8s-diff-port-821769
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-821769 -n default-k8s-diff-port-821769: exit status 2 (303.339819ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-821769 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-821769 logs -n 25: (1.021615542s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ old-k8s-version-141193 image list --format=json                                                                                                                                                                                               │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p old-k8s-version-141193 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p old-k8s-version-141193                                                                                                                                                                                                                     │ old-k8s-version-141193       │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-821769 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ start   │ -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:22 UTC │
	│ image   │ no-preload-556024 image list --format=json                                                                                                                                                                                                    │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p no-preload-556024 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ image   │ embed-certs-472518 image list --format=json                                                                                                                                                                                                   │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ pause   │ -p embed-certs-472518 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │                     │
	│ delete  │ -p no-preload-556024                                                                                                                                                                                                                          │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p no-preload-556024                                                                                                                                                                                                                          │ no-preload-556024            │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p embed-certs-472518                                                                                                                                                                                                                         │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ delete  │ -p embed-certs-472518                                                                                                                                                                                                                         │ embed-certs-472518           │ jenkins │ v1.37.0 │ 10 Oct 25 18:21 UTC │ 10 Oct 25 18:21 UTC │
	│ addons  │ enable metrics-server -p newest-cni-121129 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │                     │
	│ stop    │ -p newest-cni-121129 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ addons  │ enable dashboard -p newest-cni-121129 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ start   │ -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ image   │ newest-cni-121129 image list --format=json                                                                                                                                                                                                    │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ pause   │ -p newest-cni-121129 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │                     │
	│ delete  │ -p newest-cni-121129                                                                                                                                                                                                                          │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ delete  │ -p newest-cni-121129                                                                                                                                                                                                                          │ newest-cni-121129            │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ image   │ default-k8s-diff-port-821769 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │ 10 Oct 25 18:22 UTC │
	│ pause   │ -p default-k8s-diff-port-821769 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-821769 │ jenkins │ v1.37.0 │ 10 Oct 25 18:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 18:22:05
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 18:22:05.290569  335513 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:22:05.290861  335513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:22:05.290870  335513 out.go:374] Setting ErrFile to fd 2...
	I1010 18:22:05.290877  335513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:22:05.291147  335513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:22:05.291697  335513 out.go:368] Setting JSON to false
	I1010 18:22:05.292906  335513 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3865,"bootTime":1760116660,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:22:05.293008  335513 start.go:141] virtualization: kvm guest
	I1010 18:22:05.294961  335513 out.go:179] * [newest-cni-121129] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:22:05.296259  335513 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:22:05.296288  335513 notify.go:220] Checking for updates...
	I1010 18:22:05.298639  335513 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:22:05.299676  335513 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:22:05.300690  335513 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:22:05.301797  335513 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:22:05.302929  335513 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:22:05.307755  335513 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:22:05.308318  335513 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:22:05.332954  335513 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:22:05.333071  335513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:22:05.393251  335513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-10 18:22:05.383186457 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:22:05.393422  335513 docker.go:318] overlay module found
	I1010 18:22:05.395166  335513 out.go:179] * Using the docker driver based on existing profile
	I1010 18:22:05.396306  335513 start.go:305] selected driver: docker
	I1010 18:22:05.396321  335513 start.go:925] validating driver "docker" against &{Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:22:05.396438  335513 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:22:05.397122  335513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:22:05.458840  335513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-10 18:22:05.448230468 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:22:05.459176  335513 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1010 18:22:05.459216  335513 cni.go:84] Creating CNI manager for ""
	I1010 18:22:05.459260  335513 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:22:05.459302  335513 start.go:349] cluster config:
	{Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:22:05.461891  335513 out.go:179] * Starting "newest-cni-121129" primary control-plane node in "newest-cni-121129" cluster
	I1010 18:22:05.462953  335513 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 18:22:05.464080  335513 out.go:179] * Pulling base image v0.0.48-1760103811-21724 ...
	I1010 18:22:05.465182  335513 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:22:05.465219  335513 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 18:22:05.465239  335513 cache.go:58] Caching tarball of preloaded images
	I1010 18:22:05.465271  335513 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 18:22:05.465353  335513 preload.go:233] Found /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1010 18:22:05.465368  335513 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1010 18:22:05.465464  335513 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/config.json ...
	I1010 18:22:05.486563  335513 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon, skipping pull
	I1010 18:22:05.486586  335513 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in daemon, skipping load
	I1010 18:22:05.486605  335513 cache.go:232] Successfully downloaded all kic artifacts
	I1010 18:22:05.486632  335513 start.go:360] acquireMachinesLock for newest-cni-121129: {Name:mkd067d67013b78a79cc31e2d50fcfd69790fc6c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1010 18:22:05.486702  335513 start.go:364] duration metric: took 48.282µs to acquireMachinesLock for "newest-cni-121129"
	I1010 18:22:05.486725  335513 start.go:96] Skipping create...Using existing machine configuration
	I1010 18:22:05.486733  335513 fix.go:54] fixHost starting: 
	I1010 18:22:05.486937  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:05.505160  335513 fix.go:112] recreateIfNeeded on newest-cni-121129: state=Stopped err=<nil>
	W1010 18:22:05.505189  335513 fix.go:138] unexpected machine state, will restart: <nil>
	W1010 18:22:02.659629  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:04.660067  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	I1010 18:22:05.506971  335513 out.go:252] * Restarting existing docker container for "newest-cni-121129" ...
	I1010 18:22:05.507082  335513 cli_runner.go:164] Run: docker start newest-cni-121129
	I1010 18:22:05.744340  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:05.763128  335513 kic.go:430] container "newest-cni-121129" state is running.
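Editor's note: "fixHost" on a stopped profile is an ordinary restart of the existing kic container rather than a re-create. A hand-run sketch of the same sequence (profile name taken from the log):

    docker container inspect newest-cni-121129 --format '{{.State.Status}}'   # "exited" before
    docker start newest-cni-121129
    docker container inspect newest-cni-121129 --format '{{.State.Status}}'   # "running" after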
	I1010 18:22:05.763484  335513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:22:05.782418  335513 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/config.json ...
	I1010 18:22:05.782704  335513 machine.go:93] provisionDockerMachine start ...
	I1010 18:22:05.782787  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:05.801168  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:05.801379  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:05.801392  335513 main.go:141] libmachine: About to run SSH command:
	hostname
	I1010 18:22:05.802022  335513 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53092->127.0.0.1:33133: read: connection reset by peer
	I1010 18:22:08.938319  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:22:08.938347  335513 ubuntu.go:182] provisioning hostname "newest-cni-121129"
	I1010 18:22:08.938432  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:08.956809  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:08.957009  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:08.957024  335513 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-121129 && echo "newest-cni-121129" | sudo tee /etc/hostname
	I1010 18:22:09.102637  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-121129
	
	I1010 18:22:09.102708  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:09.121495  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:09.121708  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:09.121725  335513 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-121129' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-121129/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-121129' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1010 18:22:09.255802  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: 
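Editor's note: all of the provisioning commands above travel over SSH to the container's 22/tcp, which Docker has published on a host port (33133 in this run). A minimal sketch of reaching the node the same way, using the key path and username that appear in the sshutil lines below:

    PORT=$(docker container inspect newest-cni-121129 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}')
    ssh -i /home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa \
      -p "$PORT" docker@127.0.0.1 hostname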
	I1010 18:22:09.255838  335513 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21724-5815/.minikube CaCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21724-5815/.minikube}
	I1010 18:22:09.255877  335513 ubuntu.go:190] setting up certificates
	I1010 18:22:09.255893  335513 provision.go:84] configureAuth start
	I1010 18:22:09.255959  335513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:22:09.273223  335513 provision.go:143] copyHostCerts
	I1010 18:22:09.273280  335513 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem, removing ...
	I1010 18:22:09.273293  335513 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem
	I1010 18:22:09.273359  335513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/ca.pem (1082 bytes)
	I1010 18:22:09.273459  335513 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem, removing ...
	I1010 18:22:09.273468  335513 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem
	I1010 18:22:09.273494  335513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/cert.pem (1123 bytes)
	I1010 18:22:09.273561  335513 exec_runner.go:144] found /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem, removing ...
	I1010 18:22:09.273568  335513 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem
	I1010 18:22:09.273591  335513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21724-5815/.minikube/key.pem (1675 bytes)
	I1010 18:22:09.273652  335513 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem org=jenkins.newest-cni-121129 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-121129]
	I1010 18:22:09.612120  335513 provision.go:177] copyRemoteCerts
	I1010 18:22:09.612187  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1010 18:22:09.612221  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:09.629812  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:09.726962  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1010 18:22:09.746555  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1010 18:22:09.766845  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1010 18:22:09.786986  335513 provision.go:87] duration metric: took 531.066176ms to configureAuth
	I1010 18:22:09.787015  335513 ubuntu.go:206] setting minikube options for container-runtime
	I1010 18:22:09.787209  335513 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:22:09.787337  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:09.805200  335513 main.go:141] libmachine: Using SSH client type: native
	I1010 18:22:09.805389  335513 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1010 18:22:09.805406  335513 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1010 18:22:10.098222  335513 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1010 18:22:10.098249  335513 machine.go:96] duration metric: took 4.31552528s to provisionDockerMachine
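Editor's note: the sysconfig drop-in written a few lines up is how minikube passes runtime flags to cri-o inside the kic image; the crio unit presumably sources /etc/sysconfig/crio.minikube. A quick verification from the host (profile name from the log):

    minikube -p newest-cni-121129 ssh -- cat /etc/sysconfig/crio.minikube
    # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '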
	I1010 18:22:10.098261  335513 start.go:293] postStartSetup for "newest-cni-121129" (driver="docker")
	I1010 18:22:10.098276  335513 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1010 18:22:10.098357  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1010 18:22:10.098407  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.115908  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.213790  335513 ssh_runner.go:195] Run: cat /etc/os-release
	I1010 18:22:10.217524  335513 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1010 18:22:10.217553  335513 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1010 18:22:10.217567  335513 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/addons for local assets ...
	I1010 18:22:10.217636  335513 filesync.go:126] Scanning /home/jenkins/minikube-integration/21724-5815/.minikube/files for local assets ...
	I1010 18:22:10.217740  335513 filesync.go:149] local asset: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem -> 93542.pem in /etc/ssl/certs
	I1010 18:22:10.217864  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1010 18:22:10.226684  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:22:10.248076  335513 start.go:296] duration metric: took 149.799111ms for postStartSetup
	I1010 18:22:10.248178  335513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:22:10.248226  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.266300  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.360213  335513 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1010 18:22:10.364797  335513 fix.go:56] duration metric: took 4.878059137s for fixHost
	I1010 18:22:10.364821  335513 start.go:83] releasing machines lock for "newest-cni-121129", held for 4.878105914s
	I1010 18:22:10.364878  335513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-121129
	I1010 18:22:10.383110  335513 ssh_runner.go:195] Run: cat /version.json
	I1010 18:22:10.383169  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.383208  335513 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1010 18:22:10.383290  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:10.401694  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.402069  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:10.549004  335513 ssh_runner.go:195] Run: systemctl --version
	I1010 18:22:10.555440  335513 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1010 18:22:10.589903  335513 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1010 18:22:10.594487  335513 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1010 18:22:10.594552  335513 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1010 18:22:10.603402  335513 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1010 18:22:10.603427  335513 start.go:495] detecting cgroup driver to use...
	I1010 18:22:10.603462  335513 detect.go:190] detected "systemd" cgroup driver on host os
	I1010 18:22:10.603516  335513 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1010 18:22:10.617988  335513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1010 18:22:10.630757  335513 docker.go:218] disabling cri-docker service (if available) ...
	I1010 18:22:10.630811  335513 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1010 18:22:10.645086  335513 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1010 18:22:10.659116  335513 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1010 18:22:10.739783  335513 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1010 18:22:10.821838  335513 docker.go:234] disabling docker service ...
	I1010 18:22:10.821898  335513 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1010 18:22:10.836530  335513 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1010 18:22:10.849438  335513 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1010 18:22:10.926810  335513 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1010 18:22:11.009431  335513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1010 18:22:11.022257  335513 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1010 18:22:11.037720  335513 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1010 18:22:11.037792  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.047811  335513 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1010 18:22:11.047875  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.057692  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.067884  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.077525  335513 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1010 18:22:11.087002  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.096828  335513 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.106020  335513 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1010 18:22:11.115595  335513 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1010 18:22:11.123688  335513 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1010 18:22:11.131885  335513 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:22:11.209940  335513 ssh_runner.go:195] Run: sudo systemctl restart crio
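Editor's note: reconstructed from the sed commands above (not captured from this run), /etc/crio/crio.conf.d/02-crio.conf should end up containing roughly these settings, which the daemon-reload and restart then pick up:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]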
	I1010 18:22:11.350816  335513 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1010 18:22:11.350877  335513 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1010 18:22:11.355096  335513 start.go:563] Will wait 60s for crictl version
	I1010 18:22:11.355145  335513 ssh_runner.go:195] Run: which crictl
	I1010 18:22:11.358770  335513 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1010 18:22:11.384561  335513 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1010 18:22:11.384639  335513 ssh_runner.go:195] Run: crio --version
	I1010 18:22:11.411320  335513 ssh_runner.go:195] Run: crio --version
	I1010 18:22:11.440045  335513 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1010 18:22:06.661425  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:09.158000  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:11.160422  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	I1010 18:22:11.441103  335513 cli_runner.go:164] Run: docker network inspect newest-cni-121129 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1010 18:22:11.458538  335513 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1010 18:22:11.462704  335513 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
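Editor's note: the temp-file-and-cp idiom above exists because a plain sudo echo ... >> /etc/hosts does not work; the redirection is performed by the unprivileged calling shell, not by sudo. Unrolled, the logged one-liner is:

    # rewrite /etc/hosts without any stale host.minikube.internal entry,
    # append the current mapping, then install the result as root
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts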
	I1010 18:22:11.475134  335513 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1010 18:22:11.476017  335513 kubeadm.go:883] updating cluster {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1010 18:22:11.476150  335513 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 18:22:11.476202  335513 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:22:11.507304  335513 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:22:11.507323  335513 crio.go:433] Images already preloaded, skipping extraction
	I1010 18:22:11.507363  335513 ssh_runner.go:195] Run: sudo crictl images --output json
	I1010 18:22:11.533243  335513 crio.go:514] all images are preloaded for cri-o runtime.
	I1010 18:22:11.533265  335513 cache_images.go:85] Images are preloaded, skipping loading
	I1010 18:22:11.533272  335513 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1010 18:22:11.533353  335513 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-121129 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
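Editor's note: the empty "ExecStart=" line in the unit above is the standard systemd drop-in idiom: it clears the ExecStart inherited from the base kubelet.service before the drop-in (copied below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) re-defines it. To inspect the merged result on the node:

    systemctl cat kubelet   # base unit plus drop-ins, in override order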
	I1010 18:22:11.533416  335513 ssh_runner.go:195] Run: crio config
	I1010 18:22:11.578761  335513 cni.go:84] Creating CNI manager for ""
	I1010 18:22:11.578789  335513 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 18:22:11.578804  335513 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1010 18:22:11.578824  335513 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-121129 NodeName:newest-cni-121129 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1010 18:22:11.578929  335513 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-121129"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
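	Editor's note: the manifest above is what gets copied below to /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases ship an offline linter for exactly this kind of file; as a sanity check (not part of this run, and assuming the subcommand is available in the pinned v1.34.1 binary):
	
	    /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml.new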
	
	I1010 18:22:11.578984  335513 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1010 18:22:11.587839  335513 binaries.go:44] Found k8s binaries, skipping transfer
	I1010 18:22:11.587894  335513 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1010 18:22:11.596414  335513 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1010 18:22:11.610238  335513 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1010 18:22:11.623960  335513 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1010 18:22:11.637763  335513 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1010 18:22:11.641378  335513 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1010 18:22:11.652228  335513 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:22:11.733285  335513 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:22:11.757177  335513 certs.go:69] Setting up /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129 for IP: 192.168.85.2
	I1010 18:22:11.757199  335513 certs.go:195] generating shared ca certs ...
	I1010 18:22:11.757219  335513 certs.go:227] acquiring lock for ca certs: {Name:mkd2ebf34e0d6ec3a7809bed8325fdc7fe2fcc31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:22:11.757370  335513 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key
	I1010 18:22:11.757429  335513 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key
	I1010 18:22:11.757441  335513 certs.go:257] generating profile certs ...
	I1010 18:22:11.757572  335513 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/client.key
	I1010 18:22:11.757653  335513 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key.89f266b7
	I1010 18:22:11.757703  335513 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key
	I1010 18:22:11.757835  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem (1338 bytes)
	W1010 18:22:11.757872  335513 certs.go:480] ignoring /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354_empty.pem, impossibly tiny 0 bytes
	I1010 18:22:11.757885  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca-key.pem (1675 bytes)
	I1010 18:22:11.757915  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/ca.pem (1082 bytes)
	I1010 18:22:11.757954  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/cert.pem (1123 bytes)
	I1010 18:22:11.757981  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/certs/key.pem (1675 bytes)
	I1010 18:22:11.758033  335513 certs.go:484] found cert: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem (1708 bytes)
	I1010 18:22:11.758775  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1010 18:22:11.778857  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1010 18:22:11.801208  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1010 18:22:11.824760  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1010 18:22:11.851378  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1010 18:22:11.870850  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1010 18:22:11.889951  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1010 18:22:11.908879  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/newest-cni-121129/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1010 18:22:11.928343  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/ssl/certs/93542.pem --> /usr/share/ca-certificates/93542.pem (1708 bytes)
	I1010 18:22:11.948375  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1010 18:22:11.969276  335513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21724-5815/.minikube/certs/9354.pem --> /usr/share/ca-certificates/9354.pem (1338 bytes)
	I1010 18:22:11.988998  335513 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1010 18:22:12.003247  335513 ssh_runner.go:195] Run: openssl version
	I1010 18:22:12.009554  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1010 18:22:12.018800  335513 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:22:12.022724  335513 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 10 17:30 /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:22:12.022777  335513 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1010 18:22:12.057604  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1010 18:22:12.067287  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9354.pem && ln -fs /usr/share/ca-certificates/9354.pem /etc/ssl/certs/9354.pem"
	I1010 18:22:12.076762  335513 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9354.pem
	I1010 18:22:12.080550  335513 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 10 17:36 /usr/share/ca-certificates/9354.pem
	I1010 18:22:12.080594  335513 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9354.pem
	I1010 18:22:12.114518  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9354.pem /etc/ssl/certs/51391683.0"
	I1010 18:22:12.123583  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93542.pem && ln -fs /usr/share/ca-certificates/93542.pem /etc/ssl/certs/93542.pem"
	I1010 18:22:12.132960  335513 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93542.pem
	I1010 18:22:12.137033  335513 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 10 17:36 /usr/share/ca-certificates/93542.pem
	I1010 18:22:12.137103  335513 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93542.pem
	I1010 18:22:12.172587  335513 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93542.pem /etc/ssl/certs/3ec20f2e.0"
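Editor's note: the *.0 link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, the scheme c_rehash uses to index a trust directory. Reproducing one of the links by hand:

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"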
	I1010 18:22:12.181976  335513 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1010 18:22:12.185849  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1010 18:22:12.220072  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1010 18:22:12.255822  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1010 18:22:12.300141  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1010 18:22:12.343441  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1010 18:22:12.393734  335513 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
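Editor's note: -checkend N exits 0 only if the certificate will still be valid N seconds from now, so the six checks above form a 24-hour expiry guard (presumably a failing check would steer minikube toward regenerating certs rather than reusing them). Stand-alone form:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo 'still valid for at least 24h'
    fi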
	I1010 18:22:12.454003  335513 kubeadm.go:400] StartCluster: {Name:newest-cni-121129 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-121129 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 18:22:12.454110  335513 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1010 18:22:12.454196  335513 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1010 18:22:12.487305  335513 cri.go:89] found id: "7f03778cf9929180d97b99c5a7dabc1b07cab95c28d238404c4b3cdda1350b21"
	I1010 18:22:12.487332  335513 cri.go:89] found id: "bf112ce4d768b53d1e90f30761c3ce870d54e55a9c7241326c2c1e377046fb0b"
	I1010 18:22:12.487338  335513 cri.go:89] found id: "ab61bce748bfcc69bd3fc766155054b877fa6b8c7695ee04c693a7820d3e6b33"
	I1010 18:22:12.487343  335513 cri.go:89] found id: "ef0f16a1ff912c99555175b679ca7c2499386f3f7c4b4c9a7270a180e8c15937"
	I1010 18:22:12.487347  335513 cri.go:89] found id: ""
	I1010 18:22:12.487394  335513 ssh_runner.go:195] Run: sudo runc list -f json
	W1010 18:22:12.500489  335513 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T18:22:12Z" level=error msg="open /run/runc: no such file or directory"
	I1010 18:22:12.500556  335513 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1010 18:22:12.509425  335513 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1010 18:22:12.509447  335513 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1010 18:22:12.509493  335513 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1010 18:22:12.518026  335513 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:22:12.518736  335513 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-121129" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:22:12.519045  335513 kubeconfig.go:62] /home/jenkins/minikube-integration/21724-5815/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-121129" cluster setting kubeconfig missing "newest-cni-121129" context setting]
	I1010 18:22:12.519593  335513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:22:12.520854  335513 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1010 18:22:12.530013  335513 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1010 18:22:12.530074  335513 kubeadm.go:601] duration metric: took 20.594831ms to restartPrimaryControlPlane
	I1010 18:22:12.530092  335513 kubeadm.go:402] duration metric: took 76.095724ms to StartCluster
	I1010 18:22:12.530115  335513 settings.go:142] acquiring lock: {Name:mk32701f7c6313a55b8740f0862889585a36e8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:22:12.530186  335513 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:22:12.530994  335513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/kubeconfig: {Name:mkcfa26dc30ed66c4aea3c4fa1d10a3ec1beddb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 18:22:12.531256  335513 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1010 18:22:12.531320  335513 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1010 18:22:12.531440  335513 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-121129"
	I1010 18:22:12.531451  335513 addons.go:69] Setting dashboard=true in profile "newest-cni-121129"
	I1010 18:22:12.531464  335513 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-121129"
	I1010 18:22:12.531470  335513 addons.go:238] Setting addon dashboard=true in "newest-cni-121129"
	W1010 18:22:12.531473  335513 addons.go:247] addon storage-provisioner should already be in state true
	W1010 18:22:12.531478  335513 addons.go:247] addon dashboard should already be in state true
	I1010 18:22:12.531482  335513 config.go:182] Loaded profile config "newest-cni-121129": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:22:12.531493  335513 addons.go:69] Setting default-storageclass=true in profile "newest-cni-121129"
	I1010 18:22:12.531516  335513 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:22:12.531531  335513 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-121129"
	I1010 18:22:12.531504  335513 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:22:12.531869  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.532071  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.532071  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.535048  335513 out.go:179] * Verifying Kubernetes components...
	I1010 18:22:12.536132  335513 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1010 18:22:12.558523  335513 addons.go:238] Setting addon default-storageclass=true in "newest-cni-121129"
	W1010 18:22:12.558549  335513 addons.go:247] addon default-storageclass should already be in state true
	I1010 18:22:12.558578  335513 host.go:66] Checking if "newest-cni-121129" exists ...
	I1010 18:22:12.558631  335513 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1010 18:22:12.559047  335513 cli_runner.go:164] Run: docker container inspect newest-cni-121129 --format={{.State.Status}}
	I1010 18:22:12.559747  335513 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1010 18:22:12.560780  335513 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1010 18:22:12.560840  335513 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:22:12.560860  335513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1010 18:22:12.560910  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:12.565594  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1010 18:22:12.565614  335513 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1010 18:22:12.565676  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:12.591733  335513 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1010 18:22:12.591757  335513 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1010 18:22:12.591901  335513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-121129
	I1010 18:22:12.596614  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:12.597384  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:12.615727  335513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/newest-cni-121129/id_rsa Username:docker}
	I1010 18:22:12.677916  335513 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1010 18:22:12.692091  335513 api_server.go:52] waiting for apiserver process to appear ...
	I1010 18:22:12.692167  335513 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:22:12.705996  335513 api_server.go:72] duration metric: took 174.708821ms to wait for apiserver process to appear ...
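Editor's note: decoding the process probe above: pgrep -x requires the pattern to match the whole string exactly, -f matches against the full command line rather than just the executable name, and -n returns only the newest match. Equivalent manual probe:

    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo 'apiserver process is up'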
	I1010 18:22:12.706031  335513 api_server.go:88] waiting for apiserver healthz status ...
	I1010 18:22:12.706074  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:12.762093  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1010 18:22:12.762118  335513 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1010 18:22:12.763137  335513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1010 18:22:12.775071  335513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1010 18:22:12.780905  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1010 18:22:12.780927  335513 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1010 18:22:12.802455  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1010 18:22:12.802487  335513 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1010 18:22:12.823607  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1010 18:22:12.823636  335513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1010 18:22:12.839919  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1010 18:22:12.839944  335513 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1010 18:22:12.856483  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1010 18:22:12.856511  335513 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1010 18:22:12.873148  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1010 18:22:12.873175  335513 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1010 18:22:12.888146  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1010 18:22:12.888174  335513 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1010 18:22:12.903848  335513 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:22:12.903872  335513 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1010 18:22:12.922065  335513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1010 18:22:13.877182  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 18:22:13.877224  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 18:22:13.877242  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:13.912048  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1010 18:22:13.912094  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1010 18:22:14.206894  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:14.212024  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:22:14.212069  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
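Editor's note: the two [-] entries are post-start hooks that normally settle within seconds of a restart; each named check is also exposed as its own URL path, so they can be polled individually (credentials may be required, since anonymous access typically covers only /healthz itself):

    curl -k https://192.168.85.2:8443/healthz/poststarthook/rbac/bootstrap-roles
    curl -k https://192.168.85.2:8443/healthz/poststarthook/scheduling/bootstrap-system-priority-classes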
	I1010 18:22:14.404024  335513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.640853795s)
	I1010 18:22:14.404112  335513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.629010505s)
	I1010 18:22:14.404217  335513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.482117938s)
	I1010 18:22:14.406078  335513 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-121129 addons enable metrics-server
	
	I1010 18:22:14.415455  335513 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1010 18:22:14.416687  335513 addons.go:514] duration metric: took 1.885368042s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1010 18:22:14.706642  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:14.710574  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1010 18:22:14.710598  335513 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1010 18:22:15.206122  335513 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1010 18:22:15.211233  335513 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1010 18:22:15.212217  335513 api_server.go:141] control plane version: v1.34.1
	I1010 18:22:15.212245  335513 api_server.go:131] duration metric: took 2.506207886s to wait for apiserver health ...
	I1010 18:22:15.212254  335513 system_pods.go:43] waiting for kube-system pods to appear ...
	I1010 18:22:15.216033  335513 system_pods.go:59] 8 kube-system pods found
	I1010 18:22:15.216081  335513 system_pods.go:61] "coredns-66bc5c9577-bbxwj" [54b0d9c6-555f-476b-90d2-aca531478020] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1010 18:22:15.216094  335513 system_pods.go:61] "etcd-newest-cni-121129" [24b69503-efe0-4418-b656-58b90f7d7420] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1010 18:22:15.216110  335513 system_pods.go:61] "kindnet-9ml5n" [22d3f2b7-d65b-4c8e-a02f-58ead02d9794] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1010 18:22:15.216124  335513 system_pods.go:61] "kube-apiserver-newest-cni-121129" [c429c3d5-c663-453e-9d48-8eacc534ebf4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1010 18:22:15.216133  335513 system_pods.go:61] "kube-controller-manager-newest-cni-121129" [5e35e588-6a2a-414e-aea9-4d1d8b7897dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1010 18:22:15.216142  335513 system_pods.go:61] "kube-proxy-sw4cj" [82e9ec15-44c0-4bfd-8b16-3862f7bb01a6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1010 18:22:15.216147  335513 system_pods.go:61] "kube-scheduler-newest-cni-121129" [55bc4998-af60-4c82-a3cc-18ccc57ede90] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1010 18:22:15.216160  335513 system_pods.go:61] "storage-provisioner" [c4cb75b4-5b40-4243-b3df-fd256cb036f9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1010 18:22:15.216168  335513 system_pods.go:74] duration metric: took 3.909666ms to wait for pod list to return data ...
	I1010 18:22:15.216178  335513 default_sa.go:34] waiting for default service account to be created ...
	I1010 18:22:15.218489  335513 default_sa.go:45] found service account: "default"
	I1010 18:22:15.218507  335513 default_sa.go:55] duration metric: took 2.324261ms for default service account to be created ...
	I1010 18:22:15.218517  335513 kubeadm.go:586] duration metric: took 2.68723566s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1010 18:22:15.218530  335513 node_conditions.go:102] verifying NodePressure condition ...
	I1010 18:22:15.220763  335513 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1010 18:22:15.220790  335513 node_conditions.go:123] node cpu capacity is 8
	I1010 18:22:15.220807  335513 node_conditions.go:105] duration metric: took 2.269966ms to run NodePressure ...
	I1010 18:22:15.220826  335513 start.go:241] waiting for startup goroutines ...
	I1010 18:22:15.220838  335513 start.go:246] waiting for cluster config update ...
	I1010 18:22:15.220851  335513 start.go:255] writing updated cluster config ...
	I1010 18:22:15.221177  335513 ssh_runner.go:195] Run: rm -f paused
	I1010 18:22:15.271095  335513 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:22:15.273221  335513 out.go:179] * Done! kubectl is now configured to use "newest-cni-121129" cluster and "default" namespace by default
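The wait loop above is minikube repeatedly GETting the apiserver's /healthz and treating anything other than HTTP 200 as not-ready; the 500 bodies list each post-start hook with [+]/[-] markers until rbac/bootstrap-roles and the priority-class hook finish. A minimal sketch of the same poller, assuming a self-signed apiserver certificate (hence InsecureSkipVerify) and using the endpoint from the log purely as an example — this is not minikube's own implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz GETs the apiserver /healthz endpoint until it returns 200
// or the deadline expires, mirroring the wait loop in the log above.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver here serves a self-signed cert, so skip
		// verification for this illustrative check only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: control plane is ready
			}
			// A 500 with per-check [+]/[-] lines means some post-start
			// hooks (e.g. rbac/bootstrap-roles) have not finished yet.
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.85.2:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}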
	W1010 18:22:13.657932  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:15.658631  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:17.658663  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:20.158567  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	W1010 18:22:22.657896  325699 pod_ready.go:104] pod "coredns-66bc5c9577-wrz5v" is not "Ready", error: <nil>
	I1010 18:22:23.158303  325699 pod_ready.go:94] pod "coredns-66bc5c9577-wrz5v" is "Ready"
	I1010 18:22:23.158329  325699 pod_ready.go:86] duration metric: took 36.005628615s for pod "coredns-66bc5c9577-wrz5v" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.160886  325699 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.165078  325699 pod_ready.go:94] pod "etcd-default-k8s-diff-port-821769" is "Ready"
	I1010 18:22:23.165100  325699 pod_ready.go:86] duration metric: took 4.192959ms for pod "etcd-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.167146  325699 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.170964  325699 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-821769" is "Ready"
	I1010 18:22:23.170991  325699 pod_ready.go:86] duration metric: took 3.822481ms for pod "kube-apiserver-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.172939  325699 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.356997  325699 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-821769" is "Ready"
	I1010 18:22:23.357025  325699 pod_ready.go:86] duration metric: took 184.065616ms for pod "kube-controller-manager-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.556347  325699 pod_ready.go:83] waiting for pod "kube-proxy-h2mzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:23.956188  325699 pod_ready.go:94] pod "kube-proxy-h2mzf" is "Ready"
	I1010 18:22:23.956226  325699 pod_ready.go:86] duration metric: took 399.852289ms for pod "kube-proxy-h2mzf" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:24.156235  325699 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:24.556503  325699 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-821769" is "Ready"
	I1010 18:22:24.556533  325699 pod_ready.go:86] duration metric: took 400.272866ms for pod "kube-scheduler-default-k8s-diff-port-821769" in "kube-system" namespace to be "Ready" or be gone ...
	I1010 18:22:24.556547  325699 pod_ready.go:40] duration metric: took 37.408032694s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1010 18:22:24.600620  325699 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1010 18:22:24.602340  325699 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-821769" cluster and "default" namespace by default
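The pod_ready.go waits logged by process 325699 amount to polling each pod's Ready condition until it is True or the pod is gone. A sketch of the same idea with client-go, assuming a kubeconfig at the default location; the helper name waitPodReady is illustrative, not minikube's:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's Ready condition is True, roughly
// what the pod_ready.go waits in the log are doing.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-66bc5c9577-wrz5v", time.Minute))
}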
	
	
	==> CRI-O <==
	Oct 10 18:21:58 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:21:58.067778718Z" level=info msg="Started container" PID=1751 containerID=094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2/dashboard-metrics-scraper id=50825bda-284a-4f80-9dd0-43fa94aa3fec name=/runtime.v1.RuntimeService/StartContainer sandboxID=701908876e25dff2aea789d1c3a91c24ca60f96ed87f721d50a492a14985f571
	Oct 10 18:21:59 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:21:59.03335925Z" level=info msg="Removing container: 7dabe06209f98be63856405b59477b58c65b4603d592ee988bdaf873d01115e8" id=e958084b-371d-4dda-840c-714c51fb5329 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:21:59 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:21:59.043385337Z" level=info msg="Removed container 7dabe06209f98be63856405b59477b58c65b4603d592ee988bdaf873d01115e8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2/dashboard-metrics-scraper" id=e958084b-371d-4dda-840c-714c51fb5329 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.955397471Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2b7c96f2-7bed-477b-8751-98de9d6984fa name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.956452829Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6f445e76-562d-4675-8e01-21ed506a70b6 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.957522449Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2/dashboard-metrics-scraper" id=e0838f07-05eb-4e7e-8812-0c2589ae2fca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.957755745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.964139047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.964832359Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.993222424Z" level=info msg="Created container 0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2/dashboard-metrics-scraper" id=e0838f07-05eb-4e7e-8812-0c2589ae2fca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.993893549Z" level=info msg="Starting container: 0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd" id=aef295e4-5e78-4232-8f29-b2fb04a361b1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:22:14 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:14.995930736Z" level=info msg="Started container" PID=1761 containerID=0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2/dashboard-metrics-scraper id=aef295e4-5e78-4232-8f29-b2fb04a361b1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=701908876e25dff2aea789d1c3a91c24ca60f96ed87f721d50a492a14985f571
	Oct 10 18:22:15 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:15.076705795Z" level=info msg="Removing container: 094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d" id=5b369dca-8257-463c-912e-314e029c1278 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:22:15 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:15.087403509Z" level=info msg="Removed container 094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2/dashboard-metrics-scraper" id=5b369dca-8257-463c-912e-314e029c1278 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.083987327Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=37ade6f9-6e79-4f27-8534-8ad715d917da name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.084919404Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=243f0b5a-90e7-4153-96d1-29e6d6c21628 name=/runtime.v1.ImageService/ImageStatus
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.085952603Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=70b72fbf-a86f-4e09-b0ed-979026ac95da name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.086246712Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.091486788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.091674829Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/02fb9ae758bdb3ea6d68361dfca5c556428442a4f62c197657c85c7dd8929577/merged/etc/passwd: no such file or directory"
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.091712194Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/02fb9ae758bdb3ea6d68361dfca5c556428442a4f62c197657c85c7dd8929577/merged/etc/group: no such file or directory"
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.091998282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.121332507Z" level=info msg="Created container d3f4f58452fe6cad87bbeefc0306b06959951916beb1588e1284049f7b3f4f98: kube-system/storage-provisioner/storage-provisioner" id=70b72fbf-a86f-4e09-b0ed-979026ac95da name=/runtime.v1.RuntimeService/CreateContainer
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.121930285Z" level=info msg="Starting container: d3f4f58452fe6cad87bbeefc0306b06959951916beb1588e1284049f7b3f4f98" id=3e79961d-df02-4801-89da-74c702c9aefa name=/runtime.v1.RuntimeService/StartContainer
	Oct 10 18:22:17 default-k8s-diff-port-821769 crio[568]: time="2025-10-10T18:22:17.124043485Z" level=info msg="Started container" PID=1775 containerID=d3f4f58452fe6cad87bbeefc0306b06959951916beb1588e1284049f7b3f4f98 description=kube-system/storage-provisioner/storage-provisioner id=3e79961d-df02-4801-89da-74c702c9aefa name=/runtime.v1.RuntimeService/StartContainer sandboxID=9eaed1350c29331901b74e3530739c6df8616d773aa02858b52dd37712ea35ba
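Each CRI-O entry above is the server side of a CRI RuntimeService call (ImageStatus, CreateContainer, StartContainer, RemoveContainer), identified by the id= request UUIDs. The same endpoint can be queried from the client side; a sketch using the CRI v1 API over CRI-O's default socket (the socket path is assumed from a standard install, and ListContainers is what `crictl ps -a` calls under the hood):

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket; adjust if your install differs.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Container IDs are 64 hex chars; print the short form like crictl.
		fmt.Printf("%s  %s  %s\n", c.Id[:13], c.State, c.Metadata.Name)
	}
}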
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	d3f4f58452fe6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   9eaed1350c293       storage-provisioner                                    kube-system
	0cc6c1dc24c03       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   701908876e25d       dashboard-metrics-scraper-6ffb444bf9-mrzb2             kubernetes-dashboard
	f1349f3edaedc       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   17eba9836b269       kubernetes-dashboard-855c9754f9-mb49v                  kubernetes-dashboard
	d5dadd7d16f48       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   6b9f2301af5b8       coredns-66bc5c9577-wrz5v                               kube-system
	a820896e513ae       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   57f322bb066b4       busybox                                                default
	54882de88b25d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   021e5cf1806d8       kube-proxy-h2mzf                                       kube-system
	c70a052ca72d3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   9eaed1350c293       storage-provisioner                                    kube-system
	6fc01004fca02       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   3d7553e01993a       kindnet-4w475                                          kube-system
	1352ca41b0e76       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   77b5553db6ce8       kube-scheduler-default-k8s-diff-port-821769            kube-system
	2aeadcb9e03cc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   b739aa6250254       kube-apiserver-default-k8s-diff-port-821769            kube-system
	6c6e229b2a831       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   6581b50a8c49e       etcd-default-k8s-diff-port-821769                      kube-system
	c3f03c923ad68       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   68d6bbeb4e21e       kube-controller-manager-default-k8s-diff-port-821769   kube-system
	
	
	==> coredns [d5dadd7d16f48731fcf9902bc7edb1c11a125db6a4169fdb24d901c1afb65224] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34713 - 48103 "HINFO IN 4567185946183888815.8088431449797919716. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033529641s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
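The "Still waiting on: kubernetes" lines come from CoreDNS's ready plugin, which holds readiness until the kubernetes plugin has synced its informers (the i/o timeouts to 10.96.0.1:443 above are that sync failing while the service network was still coming up). The ready plugin serves its verdict over HTTP on port 8181 at /ready; a minimal probe, with the pod IP as a placeholder you would substitute:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// checkCoreDNSReady hits the ready plugin's endpoint (:8181/ready by
// default). 200 means every readiness-participating plugin, including
// kubernetes, has synced; a non-200 matches the "Still waiting" lines.
func checkCoreDNSReady(podIP string) error {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(fmt.Sprintf("http://%s:8181/ready", podIP))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("coredns not ready: HTTP %d", resp.StatusCode)
	}
	return nil
}

func main() {
	fmt.Println(checkCoreDNSReady("10.244.0.5")) // placeholder pod IP
}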
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-821769
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-821769
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ad692bf4ab89f0e135b80e730ae25010479ecc46
	                    minikube.k8s.io/name=default-k8s-diff-port-821769
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_10T18_20_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 10 Oct 2025 18:20:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-821769
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 10 Oct 2025 18:22:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 10 Oct 2025 18:22:36 +0000   Fri, 10 Oct 2025 18:20:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 10 Oct 2025 18:22:36 +0000   Fri, 10 Oct 2025 18:20:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 10 Oct 2025 18:22:36 +0000   Fri, 10 Oct 2025 18:20:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 10 Oct 2025 18:22:36 +0000   Fri, 10 Oct 2025 18:21:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-821769
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 6694834041ede3e9eb1b67e168e90e0c
	  System UUID:                41d605da-1886-46ad-9ac8-df71dd2b8693
	  Boot ID:                    830c8438-99e6-48ba-b543-66e651cad0c8
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-wrz5v                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-default-k8s-diff-port-821769                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-4w475                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-default-k8s-diff-port-821769             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-821769    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-h2mzf                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-default-k8s-diff-port-821769             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-mrzb2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mb49v                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node default-k8s-diff-port-821769 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node default-k8s-diff-port-821769 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node default-k8s-diff-port-821769 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           109s               node-controller  Node default-k8s-diff-port-821769 event: Registered Node default-k8s-diff-port-821769 in Controller
	  Normal  NodeReady                97s                kubelet          Node default-k8s-diff-port-821769 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 59s)  kubelet          Node default-k8s-diff-port-821769 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 59s)  kubelet          Node default-k8s-diff-port-821769 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 59s)  kubelet          Node default-k8s-diff-port-821769 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node default-k8s-diff-port-821769 event: Registered Node default-k8s-diff-port-821769 in Controller
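The NodePressure verification in the run log (MemoryPressure/DiskPressure/PIDPressure False, Ready True) reads exactly the Conditions block shown in this describe output. A client-go sketch of that check, assuming a kubeconfig at the default location; the node name is taken from the log:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"default-k8s-diff-port-821769", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// A healthy node has Ready=True and every *Pressure condition False.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s (%s)\n", c.Type, c.Status, c.Reason)
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			fmt.Println("node is not Ready")
		}
	}
}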
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff da 95 0c 3e 92 2e 08 06
	[  +0.052845] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[ +11.354316] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[  +7.101927] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 9b 73 27 8c 80 08 06
	[  +0.000350] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 76 a5 06 76 2d e3 08 06
	[  +6.287191] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 27 2d 28 d6 46 08 06
	[  +0.000293] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa c6 ff 04 55 d6 08 06
	[Oct10 18:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 8c 22 f6 6b cf 08 06
	[  +0.000370] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 1e 29 bf 13 20 f9 08 06
	[ +15.511156] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3e d6 74 aa 27 d0 08 06
	[  +0.008495] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	[Oct10 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 0b 54 33 52 4e 08 06
	[  +0.000597] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b2 af 05 d4 db d1 08 06
	
	
	==> etcd [6c6e229b2a8311cf4d60aad6c602e02c2923b5ba2309e536076e40579456e8e2] <==
	{"level":"warn","ts":"2025-10-10T18:21:44.786834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.799217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.822446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.829193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.838044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.846887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.858288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.867515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.875866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.885713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.896441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.905072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.913554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.923182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.930295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.938605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.945798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.953077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.960595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.967908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.974807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.988260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:44.999969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:45.007615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-10T18:21:45.068855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43936","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 18:22:41 up  1:05,  0 user,  load average: 3.47, 4.31, 2.95
	Linux default-k8s-diff-port-821769 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6fc01004fca02171293288225d03c012204cdc683fe6069b66f91de42b957e10] <==
	I1010 18:21:46.604680       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1010 18:21:46.605700       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1010 18:21:46.605896       1 main.go:148] setting mtu 1500 for CNI 
	I1010 18:21:46.605918       1 main.go:178] kindnetd IP family: "ipv4"
	I1010 18:21:46.605949       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-10T18:21:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1010 18:21:46.809481       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1010 18:21:46.904761       1 controller.go:381] "Waiting for informer caches to sync"
	I1010 18:21:46.905029       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1010 18:21:46.905641       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1010 18:21:47.306233       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1010 18:21:47.306282       1 metrics.go:72] Registering metrics
	I1010 18:21:47.306343       1 controller.go:711] "Syncing nftables rules"
	I1010 18:21:56.810136       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1010 18:21:56.810218       1 main.go:301] handling current node
	I1010 18:22:06.812897       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1010 18:22:06.812940       1 main.go:301] handling current node
	I1010 18:22:16.809636       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1010 18:22:16.809671       1 main.go:301] handling current node
	I1010 18:22:26.812698       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1010 18:22:26.812741       1 main.go:301] handling current node
	I1010 18:22:36.810413       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1010 18:22:36.810458       1 main.go:301] handling current node
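The ten-second cadence of the "Handling node with IPs" lines is a plain resync loop: enumerate the cluster's nodes, reconcile routes and rules for each, sleep, repeat. A stripped-down sketch of that shape only — the node source and reconcile body are stand-ins, not kindnet's actual logic:

package main

import (
	"fmt"
	"time"
)

// nodeIPs would come from a Kubernetes node informer in kindnet;
// hard-coded here to keep the sketch self-contained.
func nodeIPs() []string { return []string{"192.168.103.2"} }

func reconcile(ip string) {
	// kindnet would program routes and nftables rules here.
	fmt.Printf("Handling node with IPs: map[%s:{}]\n", ip)
}

func main() {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		for _, ip := range nodeIPs() {
			reconcile(ip)
		}
	}
}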
	
	
	==> kube-apiserver [2aeadcb9e03cc805af5eff4f1b521299f31e4d618387d10eef543b4e95787f70] <==
	I1010 18:21:45.535745       1 aggregator.go:171] initial CRD sync complete...
	I1010 18:21:45.535754       1 autoregister_controller.go:144] Starting autoregister controller
	I1010 18:21:45.535761       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1010 18:21:45.535768       1 cache.go:39] Caches are synced for autoregister controller
	I1010 18:21:45.535945       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1010 18:21:45.536187       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1010 18:21:45.543186       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1010 18:21:45.553191       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1010 18:21:45.556986       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1010 18:21:45.557046       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1010 18:21:45.567402       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1010 18:21:45.567434       1 policy_source.go:240] refreshing policies
	I1010 18:21:45.590536       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1010 18:21:45.864756       1 controller.go:667] quota admission added evaluator for: namespaces
	I1010 18:21:45.895568       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1010 18:21:45.917401       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1010 18:21:45.926655       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1010 18:21:45.938505       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1010 18:21:45.985896       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.186.132"}
	I1010 18:21:46.000604       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.108.181"}
	I1010 18:21:46.438800       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1010 18:21:48.986865       1 controller.go:667] quota admission added evaluator for: endpoints
	I1010 18:21:49.089910       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1010 18:21:49.436017       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [c3f03c923ad6830325d9888fdf2ad9de25ac73298e25b5812f72951d65af2eec] <==
	I1010 18:21:48.862313       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1010 18:21:48.864631       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1010 18:21:48.867757       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1010 18:21:48.870658       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1010 18:21:48.872894       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1010 18:21:48.879727       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1010 18:21:48.880688       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1010 18:21:48.880696       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1010 18:21:48.880725       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1010 18:21:48.880879       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1010 18:21:48.881903       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1010 18:21:48.881977       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1010 18:21:48.881945       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1010 18:21:48.882164       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1010 18:21:48.882198       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1010 18:21:48.884398       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1010 18:21:48.884416       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1010 18:21:48.886937       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1010 18:21:48.888133       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1010 18:21:48.890288       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1010 18:21:48.892568       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1010 18:21:48.901011       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1010 18:21:48.901027       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1010 18:21:48.901036       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1010 18:21:48.904085       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [54882de88b25d351cee0feb4833af2c57b273bf1f3a3c88e36f676b1619686cb] <==
	I1010 18:21:46.403774       1 server_linux.go:53] "Using iptables proxy"
	I1010 18:21:46.464387       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1010 18:21:46.565977       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1010 18:21:46.566034       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1010 18:21:46.566188       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1010 18:21:46.594535       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1010 18:21:46.594587       1 server_linux.go:132] "Using iptables Proxier"
	I1010 18:21:46.600801       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1010 18:21:46.601201       1 server.go:527] "Version info" version="v1.34.1"
	I1010 18:21:46.601311       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:21:46.603361       1 config.go:309] "Starting node config controller"
	I1010 18:21:46.603380       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1010 18:21:46.603389       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1010 18:21:46.603535       1 config.go:200] "Starting service config controller"
	I1010 18:21:46.603547       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1010 18:21:46.603569       1 config.go:106] "Starting endpoint slice config controller"
	I1010 18:21:46.603574       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1010 18:21:46.603588       1 config.go:403] "Starting serviceCIDR config controller"
	I1010 18:21:46.603593       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1010 18:21:46.704252       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1010 18:21:46.704269       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1010 18:21:46.704298       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1352ca41b0e7626fbf6ee43638506dfab18bd157572e9128f411ac1c5ae54538] <==
	I1010 18:21:44.236987       1 serving.go:386] Generated self-signed cert in-memory
	W1010 18:21:45.470219       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1010 18:21:45.470322       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1010 18:21:45.470338       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1010 18:21:45.470348       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1010 18:21:45.514646       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1010 18:21:45.516830       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1010 18:21:45.520269       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:21:45.520307       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1010 18:21:45.521274       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1010 18:21:45.521367       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1010 18:21:45.620801       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 10 18:21:49 default-k8s-diff-port-821769 kubelet[728]: I1010 18:21:49.407978     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hkc9\" (UniqueName: \"kubernetes.io/projected/a8003814-45ef-4392-9b70-b82abb06ac1f-kube-api-access-7hkc9\") pod \"dashboard-metrics-scraper-6ffb444bf9-mrzb2\" (UID: \"a8003814-45ef-4392-9b70-b82abb06ac1f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2"
	Oct 10 18:21:49 default-k8s-diff-port-821769 kubelet[728]: I1010 18:21:49.408003     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/25ca2305-7568-48a1-bd71-8dbb16bb832b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-mb49v\" (UID: \"25ca2305-7568-48a1-bd71-8dbb16bb832b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mb49v"
	Oct 10 18:21:52 default-k8s-diff-port-821769 kubelet[728]: I1010 18:21:52.754665     728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 10 18:21:55 default-k8s-diff-port-821769 kubelet[728]: I1010 18:21:55.037081     728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-mb49v" podStartSLOduration=1.443374894 podStartE2EDuration="6.037041407s" podCreationTimestamp="2025-10-10 18:21:49 +0000 UTC" firstStartedPulling="2025-10-10 18:21:49.655416613 +0000 UTC m=+6.794791035" lastFinishedPulling="2025-10-10 18:21:54.249083141 +0000 UTC m=+11.388457548" observedRunningTime="2025-10-10 18:21:55.036381285 +0000 UTC m=+12.175755719" watchObservedRunningTime="2025-10-10 18:21:55.037041407 +0000 UTC m=+12.176415834"
	Oct 10 18:21:58 default-k8s-diff-port-821769 kubelet[728]: I1010 18:21:58.025283     728 scope.go:117] "RemoveContainer" containerID="7dabe06209f98be63856405b59477b58c65b4603d592ee988bdaf873d01115e8"
	Oct 10 18:21:59 default-k8s-diff-port-821769 kubelet[728]: I1010 18:21:59.031771     728 scope.go:117] "RemoveContainer" containerID="7dabe06209f98be63856405b59477b58c65b4603d592ee988bdaf873d01115e8"
	Oct 10 18:21:59 default-k8s-diff-port-821769 kubelet[728]: I1010 18:21:59.031981     728 scope.go:117] "RemoveContainer" containerID="094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d"
	Oct 10 18:21:59 default-k8s-diff-port-821769 kubelet[728]: E1010 18:21:59.032212     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrzb2_kubernetes-dashboard(a8003814-45ef-4392-9b70-b82abb06ac1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2" podUID="a8003814-45ef-4392-9b70-b82abb06ac1f"
	Oct 10 18:22:00 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:00.036493     728 scope.go:117] "RemoveContainer" containerID="094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d"
	Oct 10 18:22:00 default-k8s-diff-port-821769 kubelet[728]: E1010 18:22:00.036679     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrzb2_kubernetes-dashboard(a8003814-45ef-4392-9b70-b82abb06ac1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2" podUID="a8003814-45ef-4392-9b70-b82abb06ac1f"
	Oct 10 18:22:01 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:01.040079     728 scope.go:117] "RemoveContainer" containerID="094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d"
	Oct 10 18:22:01 default-k8s-diff-port-821769 kubelet[728]: E1010 18:22:01.040811     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrzb2_kubernetes-dashboard(a8003814-45ef-4392-9b70-b82abb06ac1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2" podUID="a8003814-45ef-4392-9b70-b82abb06ac1f"
	Oct 10 18:22:14 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:14.954820     728 scope.go:117] "RemoveContainer" containerID="094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d"
	Oct 10 18:22:15 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:15.075325     728 scope.go:117] "RemoveContainer" containerID="094dac5fc92f16e24f4a7aaa85692acb7aedde698c0d2659f5ef9db3424dff2d"
	Oct 10 18:22:15 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:15.075625     728 scope.go:117] "RemoveContainer" containerID="0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd"
	Oct 10 18:22:15 default-k8s-diff-port-821769 kubelet[728]: E1010 18:22:15.075830     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrzb2_kubernetes-dashboard(a8003814-45ef-4392-9b70-b82abb06ac1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2" podUID="a8003814-45ef-4392-9b70-b82abb06ac1f"
	Oct 10 18:22:17 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:17.083615     728 scope.go:117] "RemoveContainer" containerID="c70a052ca72d3fcf8221f750d50a9946693d80a13afebd88510aad7b927f385b"
	Oct 10 18:22:20 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:20.674368     728 scope.go:117] "RemoveContainer" containerID="0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd"
	Oct 10 18:22:20 default-k8s-diff-port-821769 kubelet[728]: E1010 18:22:20.674533     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrzb2_kubernetes-dashboard(a8003814-45ef-4392-9b70-b82abb06ac1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2" podUID="a8003814-45ef-4392-9b70-b82abb06ac1f"
	Oct 10 18:22:34 default-k8s-diff-port-821769 kubelet[728]: I1010 18:22:34.956330     728 scope.go:117] "RemoveContainer" containerID="0cc6c1dc24c033825917b1d32a4d483387a1fca52afc3cd70fc26507c34a82dd"
	Oct 10 18:22:34 default-k8s-diff-port-821769 kubelet[728]: E1010 18:22:34.956531     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-mrzb2_kubernetes-dashboard(a8003814-45ef-4392-9b70-b82abb06ac1f)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-mrzb2" podUID="a8003814-45ef-4392-9b70-b82abb06ac1f"
	Oct 10 18:22:36 default-k8s-diff-port-821769 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 10 18:22:36 default-k8s-diff-port-821769 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 10 18:22:36 default-k8s-diff-port-821769 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 10 18:22:36 default-k8s-diff-port-821769 systemd[1]: kubelet.service: Consumed 1.675s CPU time.
	
	
	==> kubernetes-dashboard [f1349f3edaedc69a7aa332fe3f3662c37e7ed235777aff61417cc51c8e32a81e] <==
	2025/10/10 18:21:54 Using namespace: kubernetes-dashboard
	2025/10/10 18:21:54 Using in-cluster config to connect to apiserver
	2025/10/10 18:21:54 Using secret token for csrf signing
	2025/10/10 18:21:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/10 18:21:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/10 18:21:54 Successful initial request to the apiserver, version: v1.34.1
	2025/10/10 18:21:54 Generating JWE encryption key
	2025/10/10 18:21:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/10 18:21:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/10 18:21:55 Initializing JWE encryption key from synchronized object
	2025/10/10 18:21:55 Creating in-cluster Sidecar client
	2025/10/10 18:21:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/10 18:21:55 Serving insecurely on HTTP port: 9090
	2025/10/10 18:22:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/10 18:21:54 Starting overwatch
	
	
	==> storage-provisioner [c70a052ca72d3fcf8221f750d50a9946693d80a13afebd88510aad7b927f385b] <==
	I1010 18:21:46.361129       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1010 18:22:16.365509       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d3f4f58452fe6cad87bbeefc0306b06959951916beb1588e1284049f7b3f4f98] <==
	I1010 18:22:17.137343       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1010 18:22:17.145922       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1010 18:22:17.145964       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1010 18:22:17.148285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:20.603581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:24.864738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:28.462716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:31.516730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:34.538971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:34.545099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:22:34.545280       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1010 18:22:34.545345       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"acf2fe9b-472b-4115-89d9-0092fd7e1fc6", APIVersion:"v1", ResourceVersion:"631", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-821769_1aa1f45c-d1f7-4ddb-b271-91df6940e918 became leader
	I1010 18:22:34.545448       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-821769_1aa1f45c-d1f7-4ddb-b271-91df6940e918!
	W1010 18:22:34.547267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:34.550730       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1010 18:22:34.646267       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-821769_1aa1f45c-d1f7-4ddb-b271-91df6940e918!
	W1010 18:22:36.553962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:36.559342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:38.562453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:38.567153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:40.570982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1010 18:22:40.574865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
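
The storage-provisioner output above runs leader election over a v1 Endpoints object, which is what produces the repeated deprecation warnings before the kube-system/k8s.io-minikube-hostpath lease is acquired. Below is a minimal client-go sketch of the same election done against a coordination.k8s.io Lease instead; the lease name and namespace come from the log, while the timings and identity handling are illustrative assumptions rather than the provisioner's actual code.

// A minimal sketch, not the storage-provisioner's code: leader election
// via a coordination.k8s.io/v1 Lease instead of the deprecated v1
// Endpoints lock that triggers the warnings above.
package main

import (
    "context"
    "log"
    "os"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/leaderelection"
    "k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
    cfg, err := rest.InClusterConfig()
    if err != nil {
        log.Fatal(err)
    }
    client := kubernetes.NewForConfigOrDie(cfg)
    id, _ := os.Hostname() // lock identity; typically the pod name

    lock := &resourcelock.LeaseLock{
        // Lease name and namespace taken from the log above.
        LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
        Client:     client.CoordinationV1(),
        LockConfig: resourcelock.ResourceLockConfig{Identity: id},
    }

    leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
        Lock:            lock,
        ReleaseOnCancel: true,
        LeaseDuration:   15 * time.Second, // illustrative defaults
        RenewDeadline:   10 * time.Second,
        RetryPeriod:     2 * time.Second,
        Callbacks: leaderelection.LeaderCallbacks{
            OnStartedLeading: func(ctx context.Context) {
                log.Println("successfully acquired lease; starting provisioner controller")
            },
            OnStoppedLeading: func() {
                log.Println("lost lease; stopping")
            },
        },
    })
}

With a Lease lock the deprecation warnings go away, since the election no longer reads or writes Endpoints objects.
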
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-821769 -n default-k8s-diff-port-821769
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-821769 -n default-k8s-diff-port-821769: exit status 2 (307.365771ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-821769 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (5.59s)
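
For context on the CrashLoopBackOff messages in the kubelet log above: kubelet delays each container restart, starting at 10s and doubling per failed restart up to a 5-minute cap, which is why the pod_workers errors report "back-off 10s" and later "back-off 20s". A minimal sketch of that progression, assuming kubelet's default constants (illustrative, not kubelet source):

// A minimal sketch of the restart back-off seen above, assuming kubelet's
// defaults (10s base, doubling, 5m cap); not kubelet source.
package main

import (
    "fmt"
    "time"
)

const (
    initialBackoff = 10 * time.Second // kubelet's base container back-off
    maxBackoff     = 5 * time.Minute  // kubelet's maximum container back-off
)

// nextBackoff returns the delay before the next restart attempt.
func nextBackoff(current time.Duration) time.Duration {
    if current == 0 {
        return initialBackoff
    }
    if next := current * 2; next < maxBackoff {
        return next
    }
    return maxBackoff
}

func main() {
    d := time.Duration(0)
    for i := 1; i <= 7; i++ {
        d = nextBackoff(d)
        fmt.Printf("failed restart %d -> back-off %s\n", i, d)
    }
    // Prints: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s
}
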

                                                
                                    

Test pass (264/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 41.41
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 13.68
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.4
21 TestBinaryMirror 0.82
22 TestOffline 84.85
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 157.21
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 10.43
48 TestAddons/StoppedEnableDisable 16.66
49 TestCertOptions 24.53
50 TestCertExpiration 214.87
52 TestForceSystemdFlag 23.61
53 TestForceSystemdEnv 36.51
55 TestKVMDriverInstallOrUpdate 0.96
59 TestErrorSpam/setup 19.06
60 TestErrorSpam/start 0.61
61 TestErrorSpam/status 0.92
62 TestErrorSpam/pause 7.07
63 TestErrorSpam/unpause 5.1
64 TestErrorSpam/stop 2.52
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 38.29
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.37
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.6
76 TestFunctional/serial/CacheCmd/cache/add_local 1.95
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.49
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.1
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 88.67
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.24
87 TestFunctional/serial/LogsFileCmd 1.29
88 TestFunctional/serial/InvalidService 3.61
90 TestFunctional/parallel/ConfigCmd 0.35
91 TestFunctional/parallel/DashboardCmd 11.07
92 TestFunctional/parallel/DryRun 0.36
93 TestFunctional/parallel/InternationalLanguage 0.14
94 TestFunctional/parallel/StatusCmd 0.91
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 58.69
102 TestFunctional/parallel/SSHCmd 0.64
103 TestFunctional/parallel/CpCmd 1.85
104 TestFunctional/parallel/MySQL 46.8
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.66
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
114 TestFunctional/parallel/License 0.63
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 0.45
118 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
119 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
120 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 40.22
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
132 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
133 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
134 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
135 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
136 TestFunctional/parallel/ImageCommands/ImageBuild 3.4
137 TestFunctional/parallel/ImageCommands/Setup 2
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
146 TestFunctional/parallel/ProfileCmd/profile_list 0.38
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
148 TestFunctional/parallel/MountCmd/any-port 6.58
149 TestFunctional/parallel/MountCmd/specific-port 2.04
150 TestFunctional/parallel/MountCmd/VerifyCleanup 1.81
151 TestFunctional/parallel/ServiceCmd/List 1.7
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 175.28
164 TestMultiControlPlane/serial/DeployApp 4.69
165 TestMultiControlPlane/serial/PingHostFromPods 0.96
166 TestMultiControlPlane/serial/AddWorkerNode 27.36
167 TestMultiControlPlane/serial/NodeLabels 0.06
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
169 TestMultiControlPlane/serial/CopyFile 16.33
170 TestMultiControlPlane/serial/StopSecondaryNode 19.73
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
172 TestMultiControlPlane/serial/RestartSecondaryNode 14.49
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 118.5
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.52
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
177 TestMultiControlPlane/serial/StopCluster 41.06
178 TestMultiControlPlane/serial/RestartCluster 51.55
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
180 TestMultiControlPlane/serial/AddSecondaryNode 76.44
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
185 TestJSONOutput/start/Command 42.12
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.15
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.2
210 TestKicCustomNetwork/create_custom_network 35.43
211 TestKicCustomNetwork/use_default_bridge_network 23.6
212 TestKicExistingNetwork 23.27
213 TestKicCustomSubnet 26.12
214 TestKicStaticIP 25.23
215 TestMainNoArgs 0.04
216 TestMinikubeProfile 47.92
219 TestMountStart/serial/StartWithMountFirst 6.81
220 TestMountStart/serial/VerifyMountFirst 0.25
221 TestMountStart/serial/StartWithMountSecond 6.04
222 TestMountStart/serial/VerifyMountSecond 0.25
223 TestMountStart/serial/DeleteFirst 1.69
224 TestMountStart/serial/VerifyMountPostDelete 0.25
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.49
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 91.73
231 TestMultiNode/serial/DeployApp2Nodes 4.31
232 TestMultiNode/serial/PingHostFrom2Pods 0.66
233 TestMultiNode/serial/AddNode 23.98
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.66
236 TestMultiNode/serial/CopyFile 9.37
237 TestMultiNode/serial/StopNode 2.19
238 TestMultiNode/serial/StartAfterStop 7.3
239 TestMultiNode/serial/RestartKeepsNodes 76.79
240 TestMultiNode/serial/DeleteNode 5.2
241 TestMultiNode/serial/StopMultiNode 28.54
242 TestMultiNode/serial/RestartMultiNode 48.69
243 TestMultiNode/serial/ValidateNameConflict 23.28
248 TestPreload 94.14
250 TestScheduledStopUnix 98.52
253 TestInsufficientStorage 9.61
254 TestRunningBinaryUpgrade 47.49
256 TestKubernetesUpgrade 316.37
257 TestMissingContainerUpgrade 98.99
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
263 TestNoKubernetes/serial/StartWithK8s 36.06
268 TestNetworkPlugins/group/false 8.18
272 TestNoKubernetes/serial/StartWithStopK8s 28.89
273 TestNoKubernetes/serial/Start 5.31
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
275 TestNoKubernetes/serial/ProfileList 1.75
276 TestNoKubernetes/serial/Stop 1.27
277 TestNoKubernetes/serial/StartNoArgs 6.83
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
279 TestStoppedBinaryUpgrade/Setup 3.06
280 TestStoppedBinaryUpgrade/Upgrade 67.22
281 TestStoppedBinaryUpgrade/MinikubeLogs 0.93
290 TestPause/serial/Start 40.74
291 TestNetworkPlugins/group/auto/Start 70.62
292 TestPause/serial/SecondStartNoReconfiguration 6.68
293 TestNetworkPlugins/group/kindnet/Start 40.96
295 TestNetworkPlugins/group/calico/Start 47.49
296 TestNetworkPlugins/group/kindnet/ControllerPod 6.03
297 TestNetworkPlugins/group/auto/KubeletFlags 0.32
298 TestNetworkPlugins/group/auto/NetCatPod 10.25
299 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
300 TestNetworkPlugins/group/kindnet/NetCatPod 9.26
301 TestNetworkPlugins/group/auto/DNS 0.11
302 TestNetworkPlugins/group/auto/Localhost 0.09
303 TestNetworkPlugins/group/auto/HairPin 0.09
304 TestNetworkPlugins/group/kindnet/DNS 0.15
305 TestNetworkPlugins/group/kindnet/Localhost 0.11
306 TestNetworkPlugins/group/kindnet/HairPin 0.11
307 TestNetworkPlugins/group/calico/ControllerPod 6.01
308 TestNetworkPlugins/group/calico/KubeletFlags 0.32
309 TestNetworkPlugins/group/calico/NetCatPod 8.26
310 TestNetworkPlugins/group/calico/DNS 0.15
311 TestNetworkPlugins/group/calico/Localhost 0.1
312 TestNetworkPlugins/group/calico/HairPin 0.12
313 TestNetworkPlugins/group/custom-flannel/Start 51.1
314 TestNetworkPlugins/group/enable-default-cni/Start 42.3
315 TestNetworkPlugins/group/flannel/Start 48.51
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.18
318 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
319 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.19
320 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
321 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
322 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
323 TestNetworkPlugins/group/custom-flannel/DNS 0.11
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
326 TestNetworkPlugins/group/flannel/ControllerPod 6.01
327 TestNetworkPlugins/group/bridge/Start 63.93
328 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
329 TestNetworkPlugins/group/flannel/NetCatPod 12.21
331 TestStartStop/group/old-k8s-version/serial/FirstStart 53.81
332 TestNetworkPlugins/group/flannel/DNS 0.12
333 TestNetworkPlugins/group/flannel/Localhost 0.09
334 TestNetworkPlugins/group/flannel/HairPin 0.09
336 TestStartStop/group/no-preload/serial/FirstStart 57.42
338 TestStartStop/group/embed-certs/serial/FirstStart 43.08
339 TestStartStop/group/old-k8s-version/serial/DeployApp 10.26
340 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
341 TestNetworkPlugins/group/bridge/NetCatPod 10.19
343 TestNetworkPlugins/group/bridge/DNS 0.13
344 TestNetworkPlugins/group/bridge/Localhost 0.1
345 TestNetworkPlugins/group/bridge/HairPin 0.09
346 TestStartStop/group/old-k8s-version/serial/Stop 16.09
347 TestStartStop/group/embed-certs/serial/DeployApp 9.22
348 TestStartStop/group/no-preload/serial/DeployApp 11.29
350 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
351 TestStartStop/group/old-k8s-version/serial/SecondStart 47.47
352 TestStartStop/group/embed-certs/serial/Stop 17.72
355 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.03
356 TestStartStop/group/no-preload/serial/Stop 16.26
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
358 TestStartStop/group/embed-certs/serial/SecondStart 53.32
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
360 TestStartStop/group/no-preload/serial/SecondStart 49.14
361 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.23
362 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
364 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
365 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.59
366 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
369 TestStartStop/group/newest-cni/serial/FirstStart 29.03
370 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
371 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
373 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.53
374 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
375 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
376 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
378 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
380 TestStartStop/group/newest-cni/serial/DeployApp 0
382 TestStartStop/group/newest-cni/serial/Stop 2.49
383 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
384 TestStartStop/group/newest-cni/serial/SecondStart 10.36
385 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
387 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
389 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
390 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
391 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
TestDownloadOnly/v1.28.0/json-events (41.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-493383 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-493383 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (41.414212688s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (41.41s)
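
The -o=json flag used here makes minikube emit one CloudEvents-style JSON object per line on stdout instead of plain text. A minimal sketch of a consumer for that stream; the event type and data field names follow minikube's documented JSON output, and everything else is an illustrative assumption:

// A minimal sketch of a json-events consumer: read one JSON object per
// line (e.g. minikube start -o=json ... | thisprogram) and print step
// progress events. Field names are assumed from minikube's JSON output.
package main

import (
    "bufio"
    "encoding/json"
    "fmt"
    "os"
)

// event models only the fields this sketch needs.
type event struct {
    Type string `json:"type"`
    Data struct {
        CurrentStep string `json:"currentstep"`
        TotalSteps  string `json:"totalsteps"`
        Message     string `json:"message"`
    } `json:"data"`
}

func main() {
    sc := bufio.NewScanner(os.Stdin)
    sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long lines
    for sc.Scan() {
        var ev event
        if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
            continue // skip non-JSON noise
        }
        if ev.Type == "io.k8s.sigs.minikube.step" {
            fmt.Printf("step %s/%s: %s\n", ev.Data.CurrentStep, ev.Data.TotalSteps, ev.Data.Message)
        }
    }
}
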

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1010 17:29:50.269228    9354 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1010 17:29:50.269334    9354 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-493383
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-493383: exit status 85 (56.802642ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-493383 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-493383 │ jenkins │ v1.37.0 │ 10 Oct 25 17:29 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 17:29:08
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 17:29:08.895451    9366 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:29:08.895681    9366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:29:08.895691    9366 out.go:374] Setting ErrFile to fd 2...
	I1010 17:29:08.895696    9366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:29:08.895897    9366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	W1010 17:29:08.896021    9366 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21724-5815/.minikube/config/config.json: open /home/jenkins/minikube-integration/21724-5815/.minikube/config/config.json: no such file or directory
	I1010 17:29:08.896598    9366 out.go:368] Setting JSON to true
	I1010 17:29:08.897548    9366 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":689,"bootTime":1760116660,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 17:29:08.897634    9366 start.go:141] virtualization: kvm guest
	I1010 17:29:08.899770    9366 out.go:99] [download-only-493383] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1010 17:29:08.899893    9366 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball: no such file or directory
	I1010 17:29:08.899959    9366 notify.go:220] Checking for updates...
	I1010 17:29:08.901312    9366 out.go:171] MINIKUBE_LOCATION=21724
	I1010 17:29:08.902642    9366 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 17:29:08.903793    9366 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 17:29:08.905022    9366 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 17:29:08.906242    9366 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1010 17:29:08.908333    9366 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1010 17:29:08.908556    9366 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 17:29:08.932898    9366 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 17:29:08.932961    9366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 17:29:09.267521    9366 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-10 17:29:09.256456185 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 17:29:09.267622    9366 docker.go:318] overlay module found
	I1010 17:29:09.269326    9366 out.go:99] Using the docker driver based on user configuration
	I1010 17:29:09.269360    9366 start.go:305] selected driver: docker
	I1010 17:29:09.269367    9366 start.go:925] validating driver "docker" against <nil>
	I1010 17:29:09.269477    9366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 17:29:09.324206    9366 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-10 17:29:09.314899876 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 17:29:09.324342    9366 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1010 17:29:09.324799    9366 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1010 17:29:09.324946    9366 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1010 17:29:09.326602    9366 out.go:171] Using Docker driver with root privileges
	I1010 17:29:09.327590    9366 cni.go:84] Creating CNI manager for ""
	I1010 17:29:09.327644    9366 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 17:29:09.327653    9366 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1010 17:29:09.327707    9366 start.go:349] cluster config:
	{Name:download-only-493383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-493383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 17:29:09.328873    9366 out.go:99] Starting "download-only-493383" primary control-plane node in "download-only-493383" cluster
	I1010 17:29:09.328894    9366 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 17:29:09.329931    9366 out.go:99] Pulling base image v0.0.48-1760103811-21724 ...
	I1010 17:29:09.329955    9366 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1010 17:29:09.330012    9366 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 17:29:09.346699    9366 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 to local cache
	I1010 17:29:09.346873    9366 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local cache directory
	I1010 17:29:09.346965    9366 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 to local cache
	I1010 17:29:09.436324    9366 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1010 17:29:09.436357    9366 cache.go:58] Caching tarball of preloaded images
	I1010 17:29:09.436533    9366 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1010 17:29:09.438220    9366 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1010 17:29:09.438240    9366 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1010 17:29:09.552660    9366 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1010 17:29:09.552796    9366 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1010 17:29:22.198678    9366 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1010 17:29:22.199003    9366 profile.go:143] Saving config to /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/download-only-493383/config.json ...
	I1010 17:29:22.199031    9366 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/download-only-493383/config.json: {Name:mk68386f4381addbfdb0cb98de1363530d493ba6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1010 17:29:22.199213    9366 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1010 17:29:22.199382    9366 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-493383 host does not exist
	  To start a cluster, run: "minikube start -p download-only-493383"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
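
The download.go lines in the log above request the preload tarball with a ?checksum=md5:<sum> suffix after fetching the expected digest from the GCS API. A minimal sketch of that download-and-verify step; the URL and destination path below are placeholders, and only the MD5 comparison mirrors what the log shows:

// A minimal sketch, not minikube's download.go: fetch a file and reject
// it when its MD5 digest does not match the expected checksum.
package main

import (
    "crypto/md5"
    "encoding/hex"
    "fmt"
    "io"
    "net/http"
    "os"
)

func downloadWithMD5(url, wantMD5, dest string) error {
    resp, err := http.Get(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("unexpected status: %s", resp.Status)
    }

    out, err := os.Create(dest)
    if err != nil {
        return err
    }
    defer out.Close()

    // Hash the stream while writing it to disk.
    h := md5.New()
    if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
        return err
    }
    if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
        return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
    }
    return nil
}

func main() {
    // Placeholder URL/path; the real preload URL and checksum come from the GCS API.
    err := downloadWithMD5("https://example.com/preload.tar.lz4",
        "72bc7f8573f574c02d8c9a9b3496176b", "/tmp/preload.tar.lz4")
    fmt.Println(err)
}
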

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-493383
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (13.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-963459 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-963459 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (13.682989885s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (13.68s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1010 17:30:04.354428    9354 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1010 17:30:04.354474    9354 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-963459
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-963459: exit status 85 (59.954498ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-493383 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-493383 │ jenkins │ v1.37.0 │ 10 Oct 25 17:29 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 10 Oct 25 17:29 UTC │ 10 Oct 25 17:29 UTC │
	│ delete  │ -p download-only-493383                                                                                                                                                   │ download-only-493383 │ jenkins │ v1.37.0 │ 10 Oct 25 17:29 UTC │ 10 Oct 25 17:29 UTC │
	│ start   │ -o=json --download-only -p download-only-963459 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-963459 │ jenkins │ v1.37.0 │ 10 Oct 25 17:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/10 17:29:50
	Running on machine: ubuntu-20-agent-11
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1010 17:29:50.710720    9843 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:29:50.710961    9843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:29:50.710970    9843 out.go:374] Setting ErrFile to fd 2...
	I1010 17:29:50.710974    9843 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:29:50.711206    9843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:29:50.711703    9843 out.go:368] Setting JSON to true
	I1010 17:29:50.712489    9843 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":731,"bootTime":1760116660,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 17:29:50.712565    9843 start.go:141] virtualization: kvm guest
	I1010 17:29:50.714252    9843 out.go:99] [download-only-963459] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 17:29:50.714375    9843 notify.go:220] Checking for updates...
	I1010 17:29:50.715641    9843 out.go:171] MINIKUBE_LOCATION=21724
	I1010 17:29:50.716775    9843 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 17:29:50.717849    9843 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 17:29:50.718894    9843 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 17:29:50.719918    9843 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1010 17:29:50.721628    9843 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1010 17:29:50.721910    9843 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 17:29:50.744646    9843 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 17:29:50.744769    9843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 17:29:50.803473    9843 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-10 17:29:50.79298652 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 17:29:50.803591    9843 docker.go:318] overlay module found
	I1010 17:29:50.805099    9843 out.go:99] Using the docker driver based on user configuration
	I1010 17:29:50.805120    9843 start.go:305] selected driver: docker
	I1010 17:29:50.805125    9843 start.go:925] validating driver "docker" against <nil>
	I1010 17:29:50.805194    9843 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 17:29:50.861655    9843 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:46 SystemTime:2025-10-10 17:29:50.852381992 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 17:29:50.861844    9843 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1010 17:29:50.862340    9843 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1010 17:29:50.862477    9843 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1010 17:29:50.864038    9843 out.go:171] Using Docker driver with root privileges
	I1010 17:29:50.865354    9843 cni.go:84] Creating CNI manager for ""
	I1010 17:29:50.865424    9843 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1010 17:29:50.865439    9843 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1010 17:29:50.865505    9843 start.go:349] cluster config:
	{Name:download-only-963459 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-963459 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 17:29:50.866508    9843 out.go:99] Starting "download-only-963459" primary control-plane node in "download-only-963459" cluster
	I1010 17:29:50.866537    9843 cache.go:123] Beginning downloading kic base image for docker with crio
	I1010 17:29:50.867490    9843 out.go:99] Pulling base image v0.0.48-1760103811-21724 ...
	I1010 17:29:50.867511    9843 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 17:29:50.867620    9843 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local docker daemon
	I1010 17:29:50.884096    9843 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 to local cache
	I1010 17:29:50.884224    9843 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local cache directory
	I1010 17:29:50.884242    9843 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 in local cache directory, skipping pull
	I1010 17:29:50.884248    9843 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 exists in cache, skipping pull
	I1010 17:29:50.884256    9843 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 as a tarball
	I1010 17:29:50.974144    9843 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1010 17:29:50.974172    9843 cache.go:58] Caching tarball of preloaded images
	I1010 17:29:50.974376    9843 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1010 17:29:50.976031    9843 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1010 17:29:50.976065    9843 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1010 17:29:51.092360    9843 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1010 17:29:51.092416    9843 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21724-5815/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-963459 host does not exist
	  To start a cluster, run: "minikube start -p download-only-963459"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)
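
The preload URL logged above carries a go-getter-style "?checksum=md5:<hex>" query, so the fetched tarball can be verified against the digest returned by the GCS API. A minimal sketch of the same download-and-verify pattern, assuming the github.com/hashicorp/go-getter library and a hypothetical destination path (this is not minikube's actual download.go):

// Sketch: fetch a file whose expected md5 is declared in the URL query,
// mirroring the preload download logged above. go-getter checks the
// digest and returns an error on mismatch.
package main

import (
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	src := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/" +
		"preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4" +
		"?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8"
	dst := "/tmp/preloaded-images.tar.lz4" // hypothetical destination

	if err := getter.GetFile(dst, src); err != nil {
		log.Fatalf("download failed: %v", err)
	}
	log.Printf("verified download at %s", dst)
}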

TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-963459
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.4s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-494179 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-494179" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-494179
--- PASS: TestDownloadOnlyKic (0.40s)

TestBinaryMirror (0.82s)

=== RUN   TestBinaryMirror
I1010 17:30:05.444858    9354 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-717710 --alsologtostderr --binary-mirror http://127.0.0.1:39877 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-717710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-717710
--- PASS: TestBinaryMirror (0.82s)

TestOffline (84.85s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-416783 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-416783 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m22.262832355s)
helpers_test.go:175: Cleaning up "offline-crio-416783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-416783
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-416783: (2.588791451s)
--- PASS: TestOffline (84.85s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-594989
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-594989: exit status 85 (50.174214ms)

-- stdout --
	* Profile "addons-594989" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-594989"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-594989
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-594989: exit status 85 (54.097478ms)

-- stdout --
	* Profile "addons-594989" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-594989"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (157.21s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-594989 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-594989 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m37.209954702s)
--- PASS: TestAddons/Setup (157.21s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-594989 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-594989 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/serial/GCPAuth/FakeCredentials (10.43s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-594989 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-594989 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b556e7f6-46e6-40e5-9826-18498598bc80] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b556e7f6-46e6-40e5-9826-18498598bc80] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003804576s
addons_test.go:694: (dbg) Run:  kubectl --context addons-594989 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-594989 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-594989 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.43s)

TestAddons/StoppedEnableDisable (16.66s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-594989
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-594989: (16.423617602s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-594989
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-594989
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-594989
--- PASS: TestAddons/StoppedEnableDisable (16.66s)

TestCertOptions (24.53s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-594273 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-594273 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (21.085461836s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-594273 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-594273 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-594273 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-594273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-594273
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-594273: (2.767760805s)
--- PASS: TestCertOptions (24.53s)

TestCertExpiration (214.87s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-770491 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-770491 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (26.519772577s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-770491 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-770491 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.873347708s)
helpers_test.go:175: Cleaning up "cert-expiration-770491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-770491
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-770491: (2.477536899s)
--- PASS: TestCertExpiration (214.87s)

TestForceSystemdFlag (23.61s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-201078 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-201078 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.974240511s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-201078 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-201078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-201078
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-201078: (2.367421599s)
--- PASS: TestForceSystemdFlag (23.61s)

TestForceSystemdEnv (36.51s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-518163 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-518163 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.926059734s)
helpers_test.go:175: Cleaning up "force-systemd-env-518163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-518163
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-518163: (2.586251303s)
--- PASS: TestForceSystemdEnv (36.51s)

TestKVMDriverInstallOrUpdate (0.96s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1010 18:13:57.524073    9354 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1010 18:13:57.524248    9354 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate761973017/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1010 18:13:57.552600    9354 install.go:163] /tmp/TestKVMDriverInstallOrUpdate761973017/001/docker-machine-driver-kvm2 version is 1.1.1
W1010 18:13:57.552634    9354 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1010 18:13:57.552753    9354 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1010 18:13:57.552797    9354 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate761973017/001/docker-machine-driver-kvm2
I1010 18:13:58.344301    9354 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate761973017/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1010 18:13:58.359954    9354 install.go:163] /tmp/TestKVMDriverInstallOrUpdate761973017/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.96s)
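
The validation above finds the installed driver at 1.1.1 while 1.37.0 is wanted, so the test re-downloads the binary and validates it again. A minimal sketch of that version gate, assuming golang.org/x/mod/semver; needsUpgrade is a hypothetical helper name, not minikube's install.go:

// Sketch: decide whether a cached driver binary must be replaced,
// reproducing the 1.1.1-vs-1.37.0 decision logged above.
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func needsUpgrade(installed, want string) bool {
	// semver.Compare expects the "v" prefix on both operands.
	return semver.Compare("v"+installed, "v"+want) < 0
}

func main() {
	fmt.Println(needsUpgrade("1.1.1", "1.37.0")) // true -> download v1.37.0
}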

TestErrorSpam/setup (19.06s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-088076 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-088076 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-088076 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-088076 --driver=docker  --container-runtime=crio: (19.062231895s)
--- PASS: TestErrorSpam/setup (19.06s)

TestErrorSpam/start (0.61s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 start --dry-run
--- PASS: TestErrorSpam/start (0.61s)

TestErrorSpam/status (0.92s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 status
--- PASS: TestErrorSpam/status (0.92s)

TestErrorSpam/pause (7.07s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 pause: exit status 80 (2.370367102s)

-- stdout --
	* Pausing node nospam-088076 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:36:19Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 pause: exit status 80 (2.312408338s)

-- stdout --
	* Pausing node nospam-088076 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:36:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 pause: exit status 80 (2.38213614s)

-- stdout --
	* Pausing node nospam-088076 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:36:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (7.07s)
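
All three pause attempts above fail identically: the GUEST_PAUSE error shows minikube enumerating containers with "sudo runc list -f json", and runc itself exits 1 because /run/runc is absent on this crio node. A standalone sketch of that enumeration step (our repro, not minikube's code):

// Sketch: run the exact command from the error text and surface the
// "open /run/runc: no such file or directory" failure mode.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		fmt.Printf("runc list failed: %v\noutput: %s", err, out)
		return
	}
	fmt.Printf("running containers: %s\n", out)
}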

TestErrorSpam/unpause (5.1s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 unpause: exit status 80 (1.646469691s)

-- stdout --
	* Unpausing node nospam-088076 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:36:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 unpause: exit status 80 (1.886199459s)

-- stdout --
	* Unpausing node nospam-088076 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:36:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 unpause: exit status 80 (1.565200247s)

-- stdout --
	* Unpausing node nospam-088076 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-10T17:36:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.10s)

TestErrorSpam/stop (2.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 stop: (2.342542201s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-088076 --log_dir /tmp/nospam-088076 stop
--- PASS: TestErrorSpam/stop (2.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21724-5815/.minikube/files/etc/test/nested/copy/9354/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (38.29s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728643 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-728643 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (38.289696412s)
--- PASS: TestFunctional/serial/StartWithProxy (38.29s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.37s)

=== RUN   TestFunctional/serial/SoftStart
I1010 17:37:14.571600    9354 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728643 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-728643 --alsologtostderr -v=8: (6.36796008s)
functional_test.go:678: soft start took 6.368703411s for "functional-728643" cluster.
I1010 17:37:20.939881    9354 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.37s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-728643 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.60s)

TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-728643 /tmp/TestFunctionalserialCacheCmdcacheadd_local4021164907/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 cache add minikube-local-cache-test:functional-728643
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-728643 cache add minikube-local-cache-test:functional-728643: (1.616984636s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 cache delete minikube-local-cache-test:functional-728643
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-728643
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.95s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728643 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (273.719473ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 kubectl -- --context functional-728643 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-728643 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (88.67s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728643 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1010 17:37:44.105037    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:37:44.116943    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:37:44.128476    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:37:44.149860    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:37:44.191582    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:37:44.273635    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:37:44.435126    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:37:44.756753    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:37:45.398275    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:37:46.679893    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:37:49.242802    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:37:54.364314    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:38:04.606326    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:38:25.088521    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-728643 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m28.667191354s)
functional_test.go:776: restart took 1m28.667317229s for "functional-728643" cluster.
I1010 17:38:56.457903    9354 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (88.67s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-728643 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
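
The phase and status lines above come from parsing "kubectl get po -l tier=control-plane -n kube-system -o=json" and reading each pod's .status.phase plus its Ready condition. A minimal sketch of that health check, assuming kubectl on PATH and the standard pod JSON shape (not the test's actual helper):

// Sketch: print "<component> phase" and "<component> ready" in the
// spirit of the ComponentHealth output above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct {
			Labels map[string]string `json:"labels"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-728643",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		name := p.Metadata.Labels["component"] // e.g. etcd, kube-apiserver
		fmt.Printf("%s phase: %s\n", name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s ready: %s\n", name, c.Status)
			}
		}
	}
}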

TestFunctional/serial/LogsCmd (1.24s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-728643 logs: (1.241602297s)
--- PASS: TestFunctional/serial/LogsCmd (1.24s)

TestFunctional/serial/LogsFileCmd (1.29s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 logs --file /tmp/TestFunctionalserialLogsFileCmd1578842024/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-728643 logs --file /tmp/TestFunctionalserialLogsFileCmd1578842024/001/logs.txt: (1.286931906s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

TestFunctional/serial/InvalidService (3.61s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-728643 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-728643
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-728643: exit status 115 (352.18871ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31131 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-728643 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.61s)

TestFunctional/parallel/ConfigCmd (0.35s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728643 config get cpus: exit status 14 (68.852699ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728643 config get cpus: exit status 14 (55.058374ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.35s)
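
The round trip above relies on `config get` exiting 14 when the key is unset and 0 once it is set. A sketch of driving the same unset/get/set/get/unset cycle from Go (assumes `minikube` on PATH; the profile name is a placeholder):

	package example

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// exitCode runs a `minikube config` subcommand and returns its exit status.
	func exitCode(args ...string) int {
		err := exec.Command("minikube", append([]string{"-p", "some-profile", "config"}, args...)...).Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return exitErr.ExitCode()
		}
		return 0
	}

	// configCycle mirrors the sequence logged above.
	func configCycle() error {
		if code := exitCode("get", "cpus"); code != 14 {
			return fmt.Errorf("unset key: want exit 14, got %d", code)
		}
		exitCode("set", "cpus", "2")
		if code := exitCode("get", "cpus"); code != 0 {
			return fmt.Errorf("set key: want exit 0, got %d", code)
		}
		exitCode("unset", "cpus")
		return nil
	}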

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-728643 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-728643 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 47604: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.07s)

                                                
                                    
TestFunctional/parallel/DryRun (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728643 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-728643 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (149.840704ms)

                                                
                                                
-- stdout --
	* [functional-728643] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 17:40:03.537281   47210 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:40:03.537545   47210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:40:03.537556   47210 out.go:374] Setting ErrFile to fd 2...
	I1010 17:40:03.537561   47210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:40:03.537792   47210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:40:03.538298   47210 out.go:368] Setting JSON to false
	I1010 17:40:03.539325   47210 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1343,"bootTime":1760116660,"procs":240,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 17:40:03.539418   47210 start.go:141] virtualization: kvm guest
	I1010 17:40:03.542003   47210 out.go:179] * [functional-728643] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 17:40:03.543299   47210 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 17:40:03.543301   47210 notify.go:220] Checking for updates...
	I1010 17:40:03.544616   47210 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 17:40:03.545792   47210 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 17:40:03.547089   47210 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 17:40:03.548321   47210 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 17:40:03.549480   47210 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 17:40:03.550929   47210 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:40:03.551450   47210 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 17:40:03.575681   47210 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 17:40:03.575848   47210 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 17:40:03.633331   47210 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-10 17:40:03.623269145 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 17:40:03.633440   47210 docker.go:318] overlay module found
	I1010 17:40:03.635285   47210 out.go:179] * Using the docker driver based on existing profile
	I1010 17:40:03.636531   47210 start.go:305] selected driver: docker
	I1010 17:40:03.636545   47210 start.go:925] validating driver "docker" against &{Name:functional-728643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-728643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 17:40:03.636705   47210 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 17:40:03.638581   47210 out.go:203] 
	W1010 17:40:03.639789   47210 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1010 17:40:03.640787   47210 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728643 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.36s)
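
The fast failure above is a pre-flight check: with --dry-run, minikube parses the existing profile, selects the driver, and rejects the 250MB request before touching Docker. A minimal sketch of that kind of guard (the constant and message are modeled on the RSRC_INSUFFICIENT_REQ_MEMORY text above, not copied from minikube's source):

	package example

	import "fmt"

	// minUsableMemoryMB mirrors the floor quoted in the error message above.
	const minUsableMemoryMB = 1800

	// validateMemory rejects undersized requests before any driver work begins.
	func validateMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				requestedMB, minUsableMemoryMB)
		}
		return nil
	}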

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728643 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-728643 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (143.251324ms)

                                                
                                                
-- stdout --
	* [functional-728643] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 17:40:11.729673   49455 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:40:11.729909   49455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:40:11.729919   49455 out.go:374] Setting ErrFile to fd 2...
	I1010 17:40:11.729923   49455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:40:11.730245   49455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:40:11.730644   49455 out.go:368] Setting JSON to false
	I1010 17:40:11.731736   49455 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":1352,"bootTime":1760116660,"procs":251,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 17:40:11.731850   49455 start.go:141] virtualization: kvm guest
	I1010 17:40:11.733823   49455 out.go:179] * [functional-728643] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1010 17:40:11.735024   49455 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 17:40:11.735030   49455 notify.go:220] Checking for updates...
	I1010 17:40:11.737597   49455 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 17:40:11.738806   49455 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 17:40:11.739930   49455 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 17:40:11.740924   49455 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 17:40:11.741886   49455 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 17:40:11.743192   49455 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:40:11.743695   49455 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 17:40:11.765909   49455 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 17:40:11.766009   49455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 17:40:11.819932   49455 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-10-10 17:40:11.809766984 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 17:40:11.820035   49455 docker.go:318] overlay module found
	I1010 17:40:11.822639   49455 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1010 17:40:11.823816   49455 start.go:305] selected driver: docker
	I1010 17:40:11.823831   49455 start.go:925] validating driver "docker" against &{Name:functional-728643 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760103811-21724@sha256:1e1c3270d119068e387852e35302ede0e311c2efc8f77760c3a85607402294f6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-728643 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1010 17:40:11.823933   49455 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 17:40:11.825758   49455 out.go:203] 
	W1010 17:40:11.827082   49455 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1010 17:40:11.828128   49455 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (58.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4823671e-33ba-4101-bf8a-b51dfc5f3b63] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002808997s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-728643 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-728643 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-728643 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-728643 apply -f testdata/storage-provisioner/pod.yaml
I1010 17:39:11.155983    9354 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [4feb60be-f18d-45d6-bd9b-d49e03df6765] Pending: PodScheduled:Unschedulable (0/1 nodes are available: persistentvolumeclaim "myclaim" not found. not found)
helpers_test.go:352: "sp-pod" [4feb60be-f18d-45d6-bd9b-d49e03df6765] Pending: PodScheduled:Unschedulable (0/1 nodes are available: persistentvolume "pvc-60085b2f-7a87-4bd3-90f1-5f633ae0543f" not found. not found)
helpers_test.go:352: "sp-pod" [4feb60be-f18d-45d6-bd9b-d49e03df6765] Pending
helpers_test.go:352: "sp-pod" [4feb60be-f18d-45d6-bd9b-d49e03df6765] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [4feb60be-f18d-45d6-bd9b-d49e03df6765] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 44.003390597s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-728643 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-728643 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-728643 delete -f testdata/storage-provisioner/pod.yaml: (1.047537513s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-728643 apply -f testdata/storage-provisioner/pod.yaml
I1010 17:39:56.403790    9354 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b27dc68c-046d-4319-8416-600ca4ae3985] Pending
helpers_test.go:352: "sp-pod" [b27dc68c-046d-4319-8416-600ca4ae3985] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b27dc68c-046d-4319-8416-600ca4ae3985] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003192421s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-728643 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (58.69s)
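
The shape of this test: bind a PVC, write a marker file into the mounted volume, delete the pod, schedule a fresh pod against the same claim, and confirm the file survived. A sketch of the same flow driven through kubectl (context name and manifest path are taken from the log above; the readiness waits between steps are elided):

	package example

	import "os/exec"

	// pvcRoundTrip replays the persistence check: write, recycle the pod, read back.
	func pvcRoundTrip() error {
		steps := [][]string{
			{"--context", "functional-728643", "exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
			{"--context", "functional-728643", "delete", "-f", "testdata/storage-provisioner/pod.yaml"},
			{"--context", "functional-728643", "apply", "-f", "testdata/storage-provisioner/pod.yaml"},
			// a real run waits for the new pod to be Ready before this final check
			{"--context", "functional-728643", "exec", "sp-pod", "--", "ls", "/tmp/mount"},
		}
		for _, args := range steps {
			if err := exec.Command("kubectl", args...).Run(); err != nil {
				return err
			}
		}
		return nil
	}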

                                                
                                    
TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh -n functional-728643 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 cp functional-728643:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3766938193/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh -n functional-728643 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh -n functional-728643 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.85s)

                                                
                                    
TestFunctional/parallel/MySQL (46.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-728643 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
E1010 17:39:06.049984    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "mysql-5bb876957f-xb8cr" [9169a164-41b2-4427-b529-fd9d8d43f2c5] Pending
helpers_test.go:352: "mysql-5bb876957f-xb8cr" [9169a164-41b2-4427-b529-fd9d8d43f2c5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-xb8cr" [9169a164-41b2-4427-b529-fd9d8d43f2c5] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 45.003223053s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-728643 exec mysql-5bb876957f-xb8cr -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-728643 exec mysql-5bb876957f-xb8cr -- mysql -ppassword -e "show databases;": exit status 1 (87.438659ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1010 17:39:50.846324    9354 retry.go:31] will retry after 1.464498525s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-728643 exec mysql-5bb876957f-xb8cr -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (46.80s)
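
The retry.go:31 line is the harness absorbing a startup race: the pod reports Running before mysqld's socket accepts connections, so the first `show databases` fails with ERROR 2002 and is re-run after a short delay. A minimal sketch of that retry-with-delay pattern (illustrative, not minikube's actual retry package):

	package example

	import (
		"log"
		"time"
	)

	// retry runs fn up to attempts times, sleeping between failures, the way
	// the harness re-runs the mysql client until the server socket is ready.
	func retry(attempts int, delay time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			log.Printf("will retry after %v: %v", delay, err)
			time.Sleep(delay)
		}
		return err
	}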

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9354/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "sudo cat /etc/test/nested/copy/9354/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
TestFunctional/parallel/CertSync (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9354.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "sudo cat /etc/ssl/certs/9354.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9354.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "sudo cat /usr/share/ca-certificates/9354.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/93542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "sudo cat /etc/ssl/certs/93542.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/93542.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "sudo cat /usr/share/ca-certificates/93542.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.66s)
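
The 51391683.0 and 3ec20f2e.0 names checked above are OpenSSL subject-hash filenames: each synced certificate also appears under the hash that TLS clients use when scanning /etc/ssl/certs. A sketch of computing that hash by shelling out to openssl (assumes an `openssl` binary on PATH):

	package example

	import (
		"os/exec"
		"strings"
	)

	// subjectHash returns the OpenSSL subject hash of a PEM certificate;
	// appending ".0" gives the /etc/ssl/certs filename checked above.
	func subjectHash(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}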

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-728643 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728643 ssh "sudo systemctl is-active docker": exit status 1 (326.852687ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728643 ssh "sudo systemctl is-active containerd": exit status 1 (321.765121ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)
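
The exit status 1 / "Process exited with status 3" pair above is the expected outcome: `systemctl is-active` exits non-zero for a unit that is not active (3 here) while still printing the state, and `minikube ssh` surfaces the remote failure as its own exit 1. A sketch that reads the printed state rather than the exit code (assumes a local systemd):

	package example

	import (
		"os/exec"
		"strings"
	)

	// unitActive inspects `systemctl is-active` stdout; the command exits
	// non-zero for inactive units, so the error alone is not informative.
	func unitActive(unit string) bool {
		out, _ := exec.Command("systemctl", "is-active", unit).Output()
		return strings.TrimSpace(string(out)) == "active"
	}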

                                                
                                    
TestFunctional/parallel/License (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.63s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.45s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-728643 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-728643 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-728643 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 42306: os: process already finished
helpers_test.go:519: unable to terminate pid 41939: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-728643 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-728643 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (40.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-728643 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [6dc05cec-39bf-40e4-8d63-de0b9fd3b972] Pending
helpers_test.go:352: "nginx-svc" [6dc05cec-39bf-40e4-8d63-de0b9fd3b972] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [6dc05cec-39bf-40e4-8d63-de0b9fd3b972] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 40.003347745s
I1010 17:39:44.000123    9354 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (40.22s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-728643 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.27.167 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-728643 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728643 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728643 image ls --format short --alsologtostderr:
I1010 17:40:12.753256   49968 out.go:360] Setting OutFile to fd 1 ...
I1010 17:40:12.753503   49968 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1010 17:40:12.753512   49968 out.go:374] Setting ErrFile to fd 2...
I1010 17:40:12.753516   49968 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1010 17:40:12.753713   49968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
I1010 17:40:12.754273   49968 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1010 17:40:12.754357   49968 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1010 17:40:12.754704   49968 cli_runner.go:164] Run: docker container inspect functional-728643 --format={{.State.Status}}
I1010 17:40:12.773441   49968 ssh_runner.go:195] Run: systemctl --version
I1010 17:40:12.773502   49968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728643
I1010 17:40:12.790735   49968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/functional-728643/id_rsa Username:docker}
I1010 17:40:12.886750   49968 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
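
As the stderr above shows, `image ls` on a crio cluster resolves to `sudo crictl images --output json` over SSH and then formats the result. A sketch of decoding that JSON (the field names follow the CRI image listing, matching the ImageListJson output further below; treat the exact shape as an assumption):

	package example

	import (
		"encoding/json"
		"os/exec"
	)

	// criImage holds the fields of interest from `crictl images --output json`.
	type criImage struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	}

	// listImages shells out to crictl and decodes the image list.
	func listImages() ([]criImage, error) {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			return nil, err
		}
		var payload struct {
			Images []criImage `json:"images"`
		}
		if err := json.Unmarshal(out, &payload); err != nil {
			return nil, err
		}
		return payload.Images, nil
	}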

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728643 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728643 image ls --format table --alsologtostderr:
I1010 17:40:15.164319   50306 out.go:360] Setting OutFile to fd 1 ...
I1010 17:40:15.164524   50306 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1010 17:40:15.164532   50306 out.go:374] Setting ErrFile to fd 2...
I1010 17:40:15.164536   50306 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1010 17:40:15.164721   50306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
I1010 17:40:15.165256   50306 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1010 17:40:15.165347   50306 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1010 17:40:15.165699   50306 cli_runner.go:164] Run: docker container inspect functional-728643 --format={{.State.Status}}
I1010 17:40:15.183192   50306 ssh_runner.go:195] Run: systemctl --version
I1010 17:40:15.183238   50306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728643
I1010 17:40:15.200455   50306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/functional-728643/id_rsa Username:docker}
I1010 17:40:15.295712   50306 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728643 image ls --format json --alsologtostderr:
[{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a916
7fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1
.34.1"],"size":"89046001"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728643 image ls --format json --alsologtostderr:
I1010 17:40:14.958122   50253 out.go:360] Setting OutFile to fd 1 ...
I1010 17:40:14.958364   50253 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1010 17:40:14.958373   50253 out.go:374] Setting ErrFile to fd 2...
I1010 17:40:14.958378   50253 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1010 17:40:14.958561   50253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
I1010 17:40:14.959106   50253 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1010 17:40:14.959191   50253 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1010 17:40:14.959557   50253 cli_runner.go:164] Run: docker container inspect functional-728643 --format={{.State.Status}}
I1010 17:40:14.976737   50253 ssh_runner.go:195] Run: systemctl --version
I1010 17:40:14.976773   50253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728643
I1010 17:40:14.993814   50253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/functional-728643/id_rsa Username:docker}
I1010 17:40:15.088649   50253 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)
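
Each entry in the JSON listing above carries an id, repoDigests, repoTags, and a size serialized as a string; untagged images such as the dashboard and metrics-scraper show an empty repoTags array. A minimal Go sketch for consuming that output, with illustrative type names (this is not minikube's own struct) and one element lifted from the run above:

package main

import (
	"encoding/json"
	"fmt"
)

// listedImage mirrors the four fields visible in the JSON output above.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // byte counts are emitted as strings
}

func main() {
	// One element lifted from the listing above.
	raw := []byte(`[{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"}]`)

	var images []listedImage
	if err := json.Unmarshal(raw, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		// %.12s truncates the id the way crictl/docker listings do.
		fmt.Printf("%.12s  %v  %s bytes\n", img.ID, img.RepoTags, img.Size)
	}
}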

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728643 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:7c1b9a91514d1eb5288d7cd6e91d9f451707911bfaea9307a3acbc811d4aa82e
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728643 image ls --format yaml --alsologtostderr:
I1010 17:40:12.964824   50025 out.go:360] Setting OutFile to fd 1 ...
I1010 17:40:12.965233   50025 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1010 17:40:12.965248   50025 out.go:374] Setting ErrFile to fd 2...
I1010 17:40:12.965255   50025 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1010 17:40:12.965616   50025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
I1010 17:40:12.966238   50025 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1010 17:40:12.966360   50025 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1010 17:40:12.966764   50025 cli_runner.go:164] Run: docker container inspect functional-728643 --format={{.State.Status}}
I1010 17:40:12.984595   50025 ssh_runner.go:195] Run: systemctl --version
I1010 17:40:12.984658   50025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728643
I1010 17:40:13.001537   50025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/functional-728643/id_rsa Username:docker}
I1010 17:40:13.096700   50025 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728643 ssh pgrep buildkitd: exit status 1 (250.005963ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image build -t localhost/my-image:functional-728643 testdata/build --alsologtostderr
2025/10/10 17:40:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-728643 image build -t localhost/my-image:functional-728643 testdata/build --alsologtostderr: (2.934316178s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728643 image build -t localhost/my-image:functional-728643 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1637e3e599f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-728643
--> 6dfb7a1afce
Successfully tagged localhost/my-image:functional-728643
6dfb7a1afceb7d22a819db4c45af495a46b02fb69f3d4a401e07066137290d45
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728643 image build -t localhost/my-image:functional-728643 testdata/build --alsologtostderr:
I1010 17:40:13.424295   50183 out.go:360] Setting OutFile to fd 1 ...
I1010 17:40:13.424562   50183 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1010 17:40:13.424571   50183 out.go:374] Setting ErrFile to fd 2...
I1010 17:40:13.424575   50183 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1010 17:40:13.424779   50183 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
I1010 17:40:13.425378   50183 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1010 17:40:13.426048   50183 config.go:182] Loaded profile config "functional-728643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1010 17:40:13.426501   50183 cli_runner.go:164] Run: docker container inspect functional-728643 --format={{.State.Status}}
I1010 17:40:13.444241   50183 ssh_runner.go:195] Run: systemctl --version
I1010 17:40:13.444298   50183 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728643
I1010 17:40:13.461918   50183 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/functional-728643/id_rsa Username:docker}
I1010 17:40:13.559877   50183 build_images.go:161] Building image from path: /tmp/build.2182050735.tar
I1010 17:40:13.559937   50183 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1010 17:40:13.568950   50183 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2182050735.tar
I1010 17:40:13.572774   50183 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2182050735.tar: stat -c "%s %y" /var/lib/minikube/build/build.2182050735.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2182050735.tar': No such file or directory
I1010 17:40:13.572802   50183 ssh_runner.go:362] scp /tmp/build.2182050735.tar --> /var/lib/minikube/build/build.2182050735.tar (3072 bytes)
I1010 17:40:13.593534   50183 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2182050735
I1010 17:40:13.602332   50183 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2182050735 -xf /var/lib/minikube/build/build.2182050735.tar
I1010 17:40:13.611728   50183 crio.go:315] Building image: /var/lib/minikube/build/build.2182050735
I1010 17:40:13.611795   50183 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-728643 /var/lib/minikube/build/build.2182050735 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1010 17:40:16.292293   50183 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-728643 /var/lib/minikube/build/build.2182050735 --cgroup-manager=cgroupfs: (2.680468517s)
I1010 17:40:16.292375   50183 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2182050735
I1010 17:40:16.301215   50183 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2182050735.tar
I1010 17:40:16.309260   50183 build_images.go:217] Built localhost/my-image:functional-728643 from /tmp/build.2182050735.tar
I1010 17:40:16.309287   50183 build_images.go:133] succeeded building to: functional-728643
I1010 17:40:16.309293   50183 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image ls
E1010 17:40:27.971621    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:42:44.104818    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:43:11.813212    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:47:44.105063    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.40s)
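
The stderr above spells out how image build works on a crio node: buildkitd is not running (the pgrep probe exits 1), so minikube tars the build context, uploads it to /var/lib/minikube/build, unpacks it, and drives podman with --cgroup-manager=cgroupfs. A rough sketch replaying the logged command sequence over `minikube ssh` rather than the harness's internal ssh runner; the helper name is made up and the scp of the tarball is omitted:

package main

import (
	"fmt"
	"os/exec"
)

// runOnNode stands in for minikube's internal ssh_runner: it executes one
// shell command inside the cluster node via `minikube ssh`.
func runOnNode(profile, cmd string) error {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", profile, "ssh", cmd).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	const profile = "functional-728643"
	const tar = "/var/lib/minikube/build/build.2182050735.tar"
	const dir = "/var/lib/minikube/build/build.2182050735"

	// The commands the log shows once the build context is on the node.
	for _, cmd := range []string{
		"sudo mkdir -p " + dir,
		"sudo tar -C " + dir + " -xf " + tar,
		"sudo podman build -t localhost/my-image:" + profile + " " + dir + " --cgroup-manager=cgroupfs",
		"sudo rm -rf " + dir,
		"sudo rm -f " + tar,
	} {
		if err := runOnNode(profile, cmd); err != nil {
			panic(err)
		}
	}
}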

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.984916346s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-728643
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image rm kicbase/echo-server:functional-728643 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "326.64529ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "50.739488ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "326.421154ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "50.022339ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
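
The `Took "..." to run ...` lines come from the test timing each invocation and asserting it stays under a budget. The same measurement, sketched with the standard library (the binary path is the one used throughout this report):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"profile", "list", "-o", "json"}
	start := time.Now()
	if err := exec.Command("out/minikube-linux-amd64", args...).Run(); err != nil {
		panic(err)
	}
	// Prints e.g.: Took "326.421154ms" to run [profile list -o json]
	fmt.Printf("Took %q to run %v\n", time.Since(start).String(), args)
}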

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728643 /tmp/TestFunctionalparallelMountCmdany-port2975886826/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760118001205561828" to /tmp/TestFunctionalparallelMountCmdany-port2975886826/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760118001205561828" to /tmp/TestFunctionalparallelMountCmdany-port2975886826/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760118001205561828" to /tmp/TestFunctionalparallelMountCmdany-port2975886826/001/test-1760118001205561828
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728643 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (272.744474ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1010 17:40:01.478581    9354 retry.go:31] will retry after 388.817395ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 10 17:40 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 10 17:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 10 17:40 test-1760118001205561828
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh cat /mount-9p/test-1760118001205561828
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-728643 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [5447066e-1e9e-4633-b5a8-a0593e3fae43] Pending
helpers_test.go:352: "busybox-mount" [5447066e-1e9e-4633-b5a8-a0593e3fae43] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [5447066e-1e9e-4633-b5a8-a0593e3fae43] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [5447066e-1e9e-4633-b5a8-a0593e3fae43] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003738177s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-728643 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728643 /tmp/TestFunctionalparallelMountCmdany-port2975886826/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.58s)
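
The `retry.go:31] will retry after 388.817395ms` line shows the harness polling until the 9p mount appears inside the guest: findmnt keeps exiting 1 until the host directory is actually mounted. A generic version of that loop; the backoff policy here is illustrative, not minikube's exact one:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryUntil re-runs fn with a randomized pause between attempts until it
// succeeds or the attempts are exhausted.
func retryUntil(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		pause := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", pause, err)
		time.Sleep(pause)
	}
	return err
}

func main() {
	// The check from the transcript above.
	err := retryUntil(5, 300*time.Millisecond, func() error {
		return exec.Command("out/minikube-linux-amd64", "-p", "functional-728643",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
	})
	if err != nil {
		panic(err)
	}
}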

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728643 /tmp/TestFunctionalparallelMountCmdspecific-port3742260193/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728643 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (268.885699ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1010 17:40:08.055439    9354 retry.go:31] will retry after 655.488089ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728643 /tmp/TestFunctionalparallelMountCmdspecific-port3742260193/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728643 ssh "sudo umount -f /mount-9p": exit status 1 (295.582187ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-728643 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728643 /tmp/TestFunctionalparallelMountCmdspecific-port3742260193/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728643 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1823634637/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728643 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1823634637/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728643 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1823634637/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728643 ssh "findmnt -T" /mount1: exit status 1 (377.711174ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1010 17:40:10.201655    9354 retry.go:31] will retry after 590.356265ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-728643 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728643 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1823634637/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728643 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1823634637/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728643 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1823634637/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.81s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-728643 service list: (1.702482045s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.70s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-728643 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-728643 service list -o json: (1.692745545s)
functional_test.go:1504: Took "1.692843126s" to run "out/minikube-linux-amd64 -p functional-728643 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-728643
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-728643
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-728643
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (175.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1010 17:52:44.104245    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-260202 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m54.565708626s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (175.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (4.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-260202 kubectl -- rollout status deployment/busybox: (2.965345165s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- exec busybox-7b57f96db7-654g5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- exec busybox-7b57f96db7-nb49f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- exec busybox-7b57f96db7-z5fzk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- exec busybox-7b57f96db7-654g5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- exec busybox-7b57f96db7-nb49f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- exec busybox-7b57f96db7-z5fzk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- exec busybox-7b57f96db7-654g5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- exec busybox-7b57f96db7-nb49f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- exec busybox-7b57f96db7-z5fzk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.69s)
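
The DeployApp block above checks the same three DNS names from every busybox replica: an external name (kubernetes.io), the in-cluster short name (kubernetes.default), and the fully qualified service name. The matrix, sketched with plain kubectl; the test actually goes through `minikube kubectl --`, and the pod names below are specific to this run:

package main

import "os/exec"

func main() {
	pods := []string{"busybox-7b57f96db7-654g5", "busybox-7b57f96db7-nb49f", "busybox-7b57f96db7-z5fzk"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			// Every replica must resolve all three names for the test to pass,
			// which exercises CoreDNS from each node the pods landed on.
			cmd := exec.Command("kubectl", "--context", "ha-260202", "exec", pod, "--", "nslookup", name)
			if err := cmd.Run(); err != nil {
				panic(pod + ": nslookup " + name + " failed: " + err.Error())
			}
		}
	}
}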

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (0.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- exec busybox-7b57f96db7-654g5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- exec busybox-7b57f96db7-654g5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- exec busybox-7b57f96db7-nb49f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- exec busybox-7b57f96db7-nb49f -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- exec busybox-7b57f96db7-z5fzk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 kubectl -- exec busybox-7b57f96db7-z5fzk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.96s)
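
PingHostFromPods leans on a small shell pipeline: in busybox's nslookup output, line 5 (`NR==5`) holds the answer and the third space-separated field is the address, which for the kic network is the host-side gateway 192.168.49.1. Extracting it and pinging back, sketched with plain kubectl against one pod from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const pipeline = "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", "ha-260202", "exec",
		"busybox-7b57f96db7-654g5", "--", "sh", "-c", pipeline).Output()
	if err != nil {
		panic(err)
	}
	hostIP := strings.TrimSpace(string(out)) // 192.168.49.1 in this run

	// One ping from inside the pod proves the pod-to-host route works.
	if err := exec.Command("kubectl", "--context", "ha-260202", "exec",
		"busybox-7b57f96db7-654g5", "--", "sh", "-c", "ping -c 1 "+hostIP).Run(); err != nil {
		panic(err)
	}
	fmt.Println("host reachable at", hostIP)
}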

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (27.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-260202 node add --alsologtostderr -v 5: (26.507683654s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.36s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-260202 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (16.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp testdata/cp-test.txt ha-260202:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3663296795/001/cp-test_ha-260202.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202:/home/docker/cp-test.txt ha-260202-m02:/home/docker/cp-test_ha-260202_ha-260202-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m02 "sudo cat /home/docker/cp-test_ha-260202_ha-260202-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202:/home/docker/cp-test.txt ha-260202-m03:/home/docker/cp-test_ha-260202_ha-260202-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m03 "sudo cat /home/docker/cp-test_ha-260202_ha-260202-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202:/home/docker/cp-test.txt ha-260202-m04:/home/docker/cp-test_ha-260202_ha-260202-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m04 "sudo cat /home/docker/cp-test_ha-260202_ha-260202-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp testdata/cp-test.txt ha-260202-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3663296795/001/cp-test_ha-260202-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202-m02:/home/docker/cp-test.txt ha-260202:/home/docker/cp-test_ha-260202-m02_ha-260202.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202 "sudo cat /home/docker/cp-test_ha-260202-m02_ha-260202.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202-m02:/home/docker/cp-test.txt ha-260202-m03:/home/docker/cp-test_ha-260202-m02_ha-260202-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m03 "sudo cat /home/docker/cp-test_ha-260202-m02_ha-260202-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202-m02:/home/docker/cp-test.txt ha-260202-m04:/home/docker/cp-test_ha-260202-m02_ha-260202-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m04 "sudo cat /home/docker/cp-test_ha-260202-m02_ha-260202-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp testdata/cp-test.txt ha-260202-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3663296795/001/cp-test_ha-260202-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202-m03:/home/docker/cp-test.txt ha-260202:/home/docker/cp-test_ha-260202-m03_ha-260202.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202 "sudo cat /home/docker/cp-test_ha-260202-m03_ha-260202.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202-m03:/home/docker/cp-test.txt ha-260202-m02:/home/docker/cp-test_ha-260202-m03_ha-260202-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m02 "sudo cat /home/docker/cp-test_ha-260202-m03_ha-260202-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202-m03:/home/docker/cp-test.txt ha-260202-m04:/home/docker/cp-test_ha-260202-m03_ha-260202-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m04 "sudo cat /home/docker/cp-test_ha-260202-m03_ha-260202-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp testdata/cp-test.txt ha-260202-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3663296795/001/cp-test_ha-260202-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202-m04:/home/docker/cp-test.txt ha-260202:/home/docker/cp-test_ha-260202-m04_ha-260202.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202 "sudo cat /home/docker/cp-test_ha-260202-m04_ha-260202.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202-m04:/home/docker/cp-test.txt ha-260202-m02:/home/docker/cp-test_ha-260202-m04_ha-260202-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m02 "sudo cat /home/docker/cp-test_ha-260202-m04_ha-260202-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 cp ha-260202-m04:/home/docker/cp-test.txt ha-260202-m03:/home/docker/cp-test_ha-260202-m04_ha-260202-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 ssh -n ha-260202-m03 "sudo cat /home/docker/cp-test_ha-260202-m04_ha-260202-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.33s)
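
CopyFile is an n-by-n sweep: seed each node with testdata/cp-test.txt, copy it to a local temp file and to every other node, and read every copy back with `sudo cat`. The transcript above is exactly that loop unrolled for four nodes; a compact sketch (the local-temp-file legs are omitted):

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
}

func main() {
	nodes := []string{"ha-260202", "ha-260202-m02", "ha-260202-m03", "ha-260202-m04"}
	for _, src := range nodes {
		// Seed the source node, then fan the file out to every other node
		// and read each copy back over ssh.
		run("-p", "ha-260202", "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		run("-p", "ha-260202", "ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			// e.g. /home/docker/cp-test_ha-260202-m02_ha-260202-m03.txt
			dstPath := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			run("-p", "ha-260202", "cp", src+":/home/docker/cp-test.txt", dst+":"+dstPath)
			run("-p", "ha-260202", "ssh", "-n", dst, "sudo cat "+dstPath)
		}
	}
}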

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (19.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-260202 node stop m02 --alsologtostderr -v 5: (19.055057555s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-260202 status --alsologtostderr -v 5: exit status 7 (675.090465ms)

                                                
                                                
-- stdout --
	ha-260202
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-260202-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-260202-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-260202-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1010 17:53:54.903704   74714 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:53:54.903907   74714 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:53:54.903914   74714 out.go:374] Setting ErrFile to fd 2...
	I1010 17:53:54.903917   74714 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:53:54.904142   74714 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:53:54.904302   74714 out.go:368] Setting JSON to false
	I1010 17:53:54.904330   74714 mustload.go:65] Loading cluster: ha-260202
	I1010 17:53:54.904410   74714 notify.go:220] Checking for updates...
	I1010 17:53:54.904855   74714 config.go:182] Loaded profile config "ha-260202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:53:54.904877   74714 status.go:174] checking status of ha-260202 ...
	I1010 17:53:54.905350   74714 cli_runner.go:164] Run: docker container inspect ha-260202 --format={{.State.Status}}
	I1010 17:53:54.924525   74714 status.go:371] ha-260202 host status = "Running" (err=<nil>)
	I1010 17:53:54.924553   74714 host.go:66] Checking if "ha-260202" exists ...
	I1010 17:53:54.924825   74714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-260202
	I1010 17:53:54.942652   74714 host.go:66] Checking if "ha-260202" exists ...
	I1010 17:53:54.942957   74714 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 17:53:54.942992   74714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-260202
	I1010 17:53:54.960767   74714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/ha-260202/id_rsa Username:docker}
	I1010 17:53:55.054518   74714 ssh_runner.go:195] Run: systemctl --version
	I1010 17:53:55.061145   74714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 17:53:55.074273   74714 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 17:53:55.129046   74714 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-10 17:53:55.118966183 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 17:53:55.129741   74714 kubeconfig.go:125] found "ha-260202" server: "https://192.168.49.254:8443"
	I1010 17:53:55.129774   74714 api_server.go:166] Checking apiserver status ...
	I1010 17:53:55.129820   74714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 17:53:55.142304   74714 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup
	W1010 17:53:55.151787   74714 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1010 17:53:55.151838   74714 ssh_runner.go:195] Run: ls
	I1010 17:53:55.155753   74714 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1010 17:53:55.159806   74714 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1010 17:53:55.159833   74714 status.go:463] ha-260202 apiserver status = Running (err=<nil>)
	I1010 17:53:55.159844   74714 status.go:176] ha-260202 status: &{Name:ha-260202 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1010 17:53:55.159871   74714 status.go:174] checking status of ha-260202-m02 ...
	I1010 17:53:55.160191   74714 cli_runner.go:164] Run: docker container inspect ha-260202-m02 --format={{.State.Status}}
	I1010 17:53:55.177483   74714 status.go:371] ha-260202-m02 host status = "Stopped" (err=<nil>)
	I1010 17:53:55.177503   74714 status.go:384] host is not running, skipping remaining checks
	I1010 17:53:55.177510   74714 status.go:176] ha-260202-m02 status: &{Name:ha-260202-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1010 17:53:55.177536   74714 status.go:174] checking status of ha-260202-m03 ...
	I1010 17:53:55.177828   74714 cli_runner.go:164] Run: docker container inspect ha-260202-m03 --format={{.State.Status}}
	I1010 17:53:55.195913   74714 status.go:371] ha-260202-m03 host status = "Running" (err=<nil>)
	I1010 17:53:55.195937   74714 host.go:66] Checking if "ha-260202-m03" exists ...
	I1010 17:53:55.196246   74714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-260202-m03
	I1010 17:53:55.213937   74714 host.go:66] Checking if "ha-260202-m03" exists ...
	I1010 17:53:55.214259   74714 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 17:53:55.214302   74714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-260202-m03
	I1010 17:53:55.232865   74714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/ha-260202-m03/id_rsa Username:docker}
	I1010 17:53:55.327412   74714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 17:53:55.340882   74714 kubeconfig.go:125] found "ha-260202" server: "https://192.168.49.254:8443"
	I1010 17:53:55.340914   74714 api_server.go:166] Checking apiserver status ...
	I1010 17:53:55.340957   74714 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 17:53:55.352786   74714 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup
	W1010 17:53:55.361951   74714 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1202/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1010 17:53:55.362005   74714 ssh_runner.go:195] Run: ls
	I1010 17:53:55.365545   74714 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1010 17:53:55.370384   74714 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1010 17:53:55.370408   74714 status.go:463] ha-260202-m03 apiserver status = Running (err=<nil>)
	I1010 17:53:55.370418   74714 status.go:176] ha-260202-m03 status: &{Name:ha-260202-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1010 17:53:55.370436   74714 status.go:174] checking status of ha-260202-m04 ...
	I1010 17:53:55.370663   74714 cli_runner.go:164] Run: docker container inspect ha-260202-m04 --format={{.State.Status}}
	I1010 17:53:55.388881   74714 status.go:371] ha-260202-m04 host status = "Running" (err=<nil>)
	I1010 17:53:55.388903   74714 host.go:66] Checking if "ha-260202-m04" exists ...
	I1010 17:53:55.389197   74714 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-260202-m04
	I1010 17:53:55.406792   74714 host.go:66] Checking if "ha-260202-m04" exists ...
	I1010 17:53:55.407068   74714 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 17:53:55.407124   74714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-260202-m04
	I1010 17:53:55.423936   74714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/ha-260202-m04/id_rsa Username:docker}
	I1010 17:53:55.517256   74714 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 17:53:55.530595   74714 status.go:176] ha-260202-m04 status: &{Name:ha-260202-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.73s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.49s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 node start m02 --alsologtostderr -v 5
E1010 17:54:03.192031    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:54:03.198969    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:54:03.210490    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:54:03.232782    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:54:03.274076    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:54:03.355498    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:54:03.517402    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:54:03.839679    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:54:04.481296    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:54:05.762858    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:54:07.175192    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:54:08.324572    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-260202 node start m02 --alsologtostderr -v 5: (13.565331212s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.49s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (118.5s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 stop --alsologtostderr -v 5
E1010 17:54:13.446657    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:54:23.688890    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 17:54:44.170394    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-260202 stop --alsologtostderr -v 5: (50.106094633s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 start --wait true --alsologtostderr -v 5
E1010 17:55:25.132295    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-260202 start --wait true --alsologtostderr -v 5: (1m8.282573273s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (118.50s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.52s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-260202 node delete m03 --alsologtostderr -v 5: (9.729889596s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.52s)
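
A note on the last check above: the go-template walks every node's status conditions and prints only the Ready condition's status, one value per line, which is how the test asserts that each node left in the cluster reports True. Run standalone it looks roughly like this (the output shown is illustrative for the two remaining nodes):

	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type \"Ready\"}} {{.status}}{{\"\n\"}}{{end}}{{end}}{{end}}'"
	# ' True
	#  True'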

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

TestMultiControlPlane/serial/StopCluster (41.06s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 stop --alsologtostderr -v 5
E1010 17:56:47.054588    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-260202 stop --alsologtostderr -v 5: (40.959621423s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-260202 status --alsologtostderr -v 5: exit status 7 (105.08206ms)

-- stdout --
	ha-260202
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-260202-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-260202-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1010 17:57:02.308167   88995 out.go:360] Setting OutFile to fd 1 ...
	I1010 17:57:02.308462   88995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:57:02.308473   88995 out.go:374] Setting ErrFile to fd 2...
	I1010 17:57:02.308477   88995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 17:57:02.308720   88995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 17:57:02.308942   88995 out.go:368] Setting JSON to false
	I1010 17:57:02.308976   88995 mustload.go:65] Loading cluster: ha-260202
	I1010 17:57:02.309088   88995 notify.go:220] Checking for updates...
	I1010 17:57:02.309440   88995 config.go:182] Loaded profile config "ha-260202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 17:57:02.309456   88995 status.go:174] checking status of ha-260202 ...
	I1010 17:57:02.309900   88995 cli_runner.go:164] Run: docker container inspect ha-260202 --format={{.State.Status}}
	I1010 17:57:02.328804   88995 status.go:371] ha-260202 host status = "Stopped" (err=<nil>)
	I1010 17:57:02.328826   88995 status.go:384] host is not running, skipping remaining checks
	I1010 17:57:02.328831   88995 status.go:176] ha-260202 status: &{Name:ha-260202 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1010 17:57:02.328856   88995 status.go:174] checking status of ha-260202-m02 ...
	I1010 17:57:02.329105   88995 cli_runner.go:164] Run: docker container inspect ha-260202-m02 --format={{.State.Status}}
	I1010 17:57:02.347820   88995 status.go:371] ha-260202-m02 host status = "Stopped" (err=<nil>)
	I1010 17:57:02.347840   88995 status.go:384] host is not running, skipping remaining checks
	I1010 17:57:02.347845   88995 status.go:176] ha-260202-m02 status: &{Name:ha-260202-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1010 17:57:02.347862   88995 status.go:174] checking status of ha-260202-m04 ...
	I1010 17:57:02.348102   88995 cli_runner.go:164] Run: docker container inspect ha-260202-m04 --format={{.State.Status}}
	I1010 17:57:02.365969   88995 status.go:371] ha-260202-m04 host status = "Stopped" (err=<nil>)
	I1010 17:57:02.365993   88995 status.go:384] host is not running, skipping remaining checks
	I1010 17:57:02.365998   88995 status.go:176] ha-260202-m04 status: &{Name:ha-260202-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.06s)
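
Worth noting from the non-zero exit above: `minikube status` signals state through its exit code as well as its text, and exit status 7 is what the test expects once every node is stopped. A minimal sketch of the same check (treating 7 as the "stopped" indicator follows the behavior shown in this log, not a spelled-out contract):

	out/minikube-linux-amd64 -p ha-260202 status --alsologtostderr -v 5
	rc=$?
	# 0 would mean everything is Running; 7 here covers stopped/missing hosts.
	[ "$rc" -eq 7 ] && echo "cluster is down, as expected after 'stop'"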

TestMultiControlPlane/serial/RestartCluster (51.55s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1010 17:57:44.104210    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-260202 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (50.755887932s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (51.55s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

TestMultiControlPlane/serial/AddSecondaryNode (76.44s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 node add --control-plane --alsologtostderr -v 5
E1010 17:59:03.193136    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-260202 node add --control-plane --alsologtostderr -v 5: (1m15.561488632s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-260202 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.44s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

TestJSONOutput/start/Command (42.12s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-509145 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1010 17:59:30.902178    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-509145 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (42.122600437s)
--- PASS: TestJSONOutput/start/Command (42.12s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.15s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-509145 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-509145 --output=json --user=testUser: (6.145781939s)
--- PASS: TestJSONOutput/stop/Command (6.15s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-832583 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-832583 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (61.09003ms)

-- stdout --
	{"specversion":"1.0","id":"0933c698-550b-44c1-a335-471f77893af7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-832583] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6bfcdd84-4886-456d-a66d-92a2fdd7d8a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21724"}}
	{"specversion":"1.0","id":"40f8f1c6-0972-407e-b461-99dbdc7a536e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bf4c7f3c-7d5f-46fe-a7b1-692e3c4db315","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig"}}
	{"specversion":"1.0","id":"f60519ae-d56e-4dbd-8a00-9298f3f1a37b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube"}}
	{"specversion":"1.0","id":"55b1c6f9-dac5-456d-a0bb-f0b9bcfc5669","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3b3d598b-25e5-4515-b326-c967994159a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9c4fe348-d52c-400b-9436-3ca4e318285d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-832583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-832583
--- PASS: TestErrorJSONOutput (0.20s)
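
Each line emitted under --output=json above is a CloudEvents envelope (specversion, id, source, type, data), so the error record the test asserts on can be pulled out with a line-oriented JSON filter. A sketch using jq (jq is an assumption here, not something the harness itself uses):

	out/minikube-linux-amd64 start -p json-output-error-832583 --memory=3072 \
	  --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode + " " + .data.name'
	# 56 DRV_UNSUPPORTED_OS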

TestKicCustomNetwork/create_custom_network (35.43s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-716359 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-716359 --network=: (33.265173988s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-716359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-716359
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-716359: (2.142702485s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.43s)

TestKicCustomNetwork/use_default_bridge_network (23.6s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-229297 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-229297 --network=bridge: (21.611914406s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-229297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-229297
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-229297: (1.966619367s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.60s)

TestKicExistingNetwork (23.27s)

=== RUN   TestKicExistingNetwork
I1010 18:01:16.912280    9354 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1010 18:01:16.929124    9354 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1010 18:01:16.929205    9354 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1010 18:01:16.929227    9354 cli_runner.go:164] Run: docker network inspect existing-network
W1010 18:01:16.945271    9354 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1010 18:01:16.945299    9354 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1010 18:01:16.945315    9354 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1010 18:01:16.945461    9354 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1010 18:01:16.961733    9354 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3f8fb0c8a54c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:1a:51:a2:ab:ca:d6} reservation:<nil>}
I1010 18:01:16.962183    9354 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d84620}
I1010 18:01:16.962211    9354 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1010 18:01:16.962260    9354 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1010 18:01:17.019877    9354 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-898204 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-898204 --network=existing-network: (21.16447791s)
helpers_test.go:175: Cleaning up "existing-network-898204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-898204
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-898204: (1.964385488s)
I1010 18:01:40.166494    9354 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.27s)
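
The sequence above boils down to: create a Docker network out of band, point minikube at it with --network, and confirm the network is still listed after the profile is deleted. A simplified by-hand equivalent, using the same subnet this run picked (any free /24 would do):

	docker network create --driver=bridge --subnet=192.168.58.0/24 \
	  --gateway=192.168.58.1 -o com.docker.network.driver.mtu=1500 existing-network
	out/minikube-linux-amd64 start -p existing-network-898204 --network=existing-network
	docker network ls --format '{{.Name}}'   # existing-network remains after delete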

TestKicCustomSubnet (26.12s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-752946 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-752946 --subnet=192.168.60.0/24: (23.932018018s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-752946 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-752946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-752946
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-752946: (2.162926109s)
--- PASS: TestKicCustomSubnet (26.12s)

TestKicStaticIP (25.23s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-928143 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-928143 --static-ip=192.168.200.200: (22.995877007s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-928143 ip
helpers_test.go:175: Cleaning up "static-ip-928143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-928143
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-928143: (2.104196041s)
--- PASS: TestKicStaticIP (25.23s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (47.92s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-196099 --driver=docker  --container-runtime=crio
E1010 18:02:44.104205    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-196099 --driver=docker  --container-runtime=crio: (20.12789458s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-198410 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-198410 --driver=docker  --container-runtime=crio: (21.894273386s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-196099
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-198410
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-198410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-198410
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-198410: (2.363406732s)
helpers_test.go:175: Cleaning up "first-196099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-196099
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-196099: (2.364635129s)
--- PASS: TestMinikubeProfile (47.92s)
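
The `profile list -ojson` calls above return the same data as the table view in machine-readable form, which is what the test inspects after each `profile` switch. A sketch for pulling out just the profile names (the "valid" key matches current minikube output; treat the exact shape as an assumption):

	out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'
	# first-196099
	# second-198410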

TestMountStart/serial/StartWithMountFirst (6.81s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-713054 --memory=3072 --mount-string /tmp/TestMountStartserial3371043710/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-713054 --memory=3072 --mount-string /tmp/TestMountStartserial3371043710/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.80475562s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.81s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-713054 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (6.04s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-724267 --memory=3072 --mount-string /tmp/TestMountStartserial3371043710/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-724267 --memory=3072 --mount-string /tmp/TestMountStartserial3371043710/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.038691551s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.04s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-724267 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-713054 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-713054 --alsologtostderr -v=5: (1.69415461s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-724267 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-724267
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-724267: (1.242957534s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.49s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-724267
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-724267: (6.484963623s)
--- PASS: TestMountStart/serial/RestartStopped (7.49s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-724267 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)
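
Taken together, the MountStart runs above exercise a host mount configured entirely at start time: --mount-string is host-path:guest-path, the uid/gid/msize/port flags tune the 9p server, and the mount survives stop and restart. A by-hand equivalent mirroring the flags in this run (the profile name and host path are placeholders):

	minikube start -p mount-demo --no-kubernetes --driver=docker --container-runtime=crio \
	  --mount-string /tmp/hostdir:/minikube-host \
	  --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464
	minikube -p mount-demo ssh -- ls /minikube-host   # contents of /tmp/hostdir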

TestMultiNode/serial/FreshStart2Nodes (91.73s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-030207 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1010 18:04:03.192380    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-030207 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m31.260928907s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (91.73s)

TestMultiNode/serial/DeployApp2Nodes (4.31s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030207 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030207 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-030207 -- rollout status deployment/busybox: (2.976748318s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030207 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030207 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030207 -- exec busybox-7b57f96db7-8x2r6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030207 -- exec busybox-7b57f96db7-bbs8d -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030207 -- exec busybox-7b57f96db7-8x2r6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030207 -- exec busybox-7b57f96db7-bbs8d -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030207 -- exec busybox-7b57f96db7-8x2r6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030207 -- exec busybox-7b57f96db7-bbs8d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.31s)

TestMultiNode/serial/PingHostFrom2Pods (0.66s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030207 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030207 -- exec busybox-7b57f96db7-8x2r6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030207 -- exec busybox-7b57f96db7-8x2r6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030207 -- exec busybox-7b57f96db7-bbs8d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-030207 -- exec busybox-7b57f96db7-bbs8d -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.66s)
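
The pipeline in this test recovers the address of host.minikube.internal from busybox's nslookup output: in that output format line 5 carries the answer record, and the third space-separated field on it is the IP, which the follow-up ping then targets (192.168.67.1 here). Standalone, against one of the pods above:

	kubectl --context multinode-030207 exec busybox-7b57f96db7-8x2r6 -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	# 192.168.67.1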

TestMultiNode/serial/AddNode (23.98s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-030207 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-030207 -v=5 --alsologtostderr: (23.347987034s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.98s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-030207 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.66s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

TestMultiNode/serial/CopyFile (9.37s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 cp testdata/cp-test.txt multinode-030207:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 cp multinode-030207:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile56568813/001/cp-test_multinode-030207.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 cp multinode-030207:/home/docker/cp-test.txt multinode-030207-m02:/home/docker/cp-test_multinode-030207_multinode-030207-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207-m02 "sudo cat /home/docker/cp-test_multinode-030207_multinode-030207-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 cp multinode-030207:/home/docker/cp-test.txt multinode-030207-m03:/home/docker/cp-test_multinode-030207_multinode-030207-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207-m03 "sudo cat /home/docker/cp-test_multinode-030207_multinode-030207-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 cp testdata/cp-test.txt multinode-030207-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 cp multinode-030207-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile56568813/001/cp-test_multinode-030207-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 cp multinode-030207-m02:/home/docker/cp-test.txt multinode-030207:/home/docker/cp-test_multinode-030207-m02_multinode-030207.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207 "sudo cat /home/docker/cp-test_multinode-030207-m02_multinode-030207.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 cp multinode-030207-m02:/home/docker/cp-test.txt multinode-030207-m03:/home/docker/cp-test_multinode-030207-m02_multinode-030207-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207-m03 "sudo cat /home/docker/cp-test_multinode-030207-m02_multinode-030207-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 cp testdata/cp-test.txt multinode-030207-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 cp multinode-030207-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile56568813/001/cp-test_multinode-030207-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 cp multinode-030207-m03:/home/docker/cp-test.txt multinode-030207:/home/docker/cp-test_multinode-030207-m03_multinode-030207.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207 "sudo cat /home/docker/cp-test_multinode-030207-m03_multinode-030207.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 cp multinode-030207-m03:/home/docker/cp-test.txt multinode-030207-m02:/home/docker/cp-test_multinode-030207-m03_multinode-030207-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 ssh -n multinode-030207-m02 "sudo cat /home/docker/cp-test_multinode-030207-m03_multinode-030207-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.37s)

TestMultiNode/serial/StopNode (2.19s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-030207 node stop m03: (1.24785043s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-030207 status: exit status 7 (470.259294ms)
-- stdout --
	multinode-030207
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-030207-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-030207-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-030207 status --alsologtostderr: exit status 7 (475.510204ms)
-- stdout --
	multinode-030207
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-030207-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-030207-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1010 18:05:58.148629  149316 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:05:58.148851  149316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:05:58.148861  149316 out.go:374] Setting ErrFile to fd 2...
	I1010 18:05:58.148865  149316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:05:58.149115  149316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:05:58.149298  149316 out.go:368] Setting JSON to false
	I1010 18:05:58.149326  149316 mustload.go:65] Loading cluster: multinode-030207
	I1010 18:05:58.149373  149316 notify.go:220] Checking for updates...
	I1010 18:05:58.149820  149316 config.go:182] Loaded profile config "multinode-030207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:05:58.149842  149316 status.go:174] checking status of multinode-030207 ...
	I1010 18:05:58.150369  149316 cli_runner.go:164] Run: docker container inspect multinode-030207 --format={{.State.Status}}
	I1010 18:05:58.171488  149316 status.go:371] multinode-030207 host status = "Running" (err=<nil>)
	I1010 18:05:58.171520  149316 host.go:66] Checking if "multinode-030207" exists ...
	I1010 18:05:58.171863  149316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-030207
	I1010 18:05:58.189772  149316 host.go:66] Checking if "multinode-030207" exists ...
	I1010 18:05:58.190087  149316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:05:58.190154  149316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-030207
	I1010 18:05:58.207456  149316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/multinode-030207/id_rsa Username:docker}
	I1010 18:05:58.300276  149316 ssh_runner.go:195] Run: systemctl --version
	I1010 18:05:58.306473  149316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:05:58.318853  149316 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:05:58.376070  149316 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-10 18:05:58.365414789 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:05:58.376581  149316 kubeconfig.go:125] found "multinode-030207" server: "https://192.168.67.2:8443"
	I1010 18:05:58.376611  149316 api_server.go:166] Checking apiserver status ...
	I1010 18:05:58.376650  149316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1010 18:05:58.388416  149316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup
	W1010 18:05:58.397262  149316 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1010 18:05:58.397316  149316 ssh_runner.go:195] Run: ls
	I1010 18:05:58.400781  149316 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1010 18:05:58.405677  149316 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1010 18:05:58.405695  149316 status.go:463] multinode-030207 apiserver status = Running (err=<nil>)
	I1010 18:05:58.405702  149316 status.go:176] multinode-030207 status: &{Name:multinode-030207 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1010 18:05:58.405725  149316 status.go:174] checking status of multinode-030207-m02 ...
	I1010 18:05:58.405935  149316 cli_runner.go:164] Run: docker container inspect multinode-030207-m02 --format={{.State.Status}}
	I1010 18:05:58.422605  149316 status.go:371] multinode-030207-m02 host status = "Running" (err=<nil>)
	I1010 18:05:58.422626  149316 host.go:66] Checking if "multinode-030207-m02" exists ...
	I1010 18:05:58.422874  149316 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-030207-m02
	I1010 18:05:58.439126  149316 host.go:66] Checking if "multinode-030207-m02" exists ...
	I1010 18:05:58.439350  149316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1010 18:05:58.439380  149316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-030207-m02
	I1010 18:05:58.455232  149316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21724-5815/.minikube/machines/multinode-030207-m02/id_rsa Username:docker}
	I1010 18:05:58.547028  149316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1010 18:05:58.559731  149316 status.go:176] multinode-030207-m02 status: &{Name:multinode-030207-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1010 18:05:58.559780  149316 status.go:174] checking status of multinode-030207-m03 ...
	I1010 18:05:58.560041  149316 cli_runner.go:164] Run: docker container inspect multinode-030207-m03 --format={{.State.Status}}
	I1010 18:05:58.577511  149316 status.go:371] multinode-030207-m03 host status = "Stopped" (err=<nil>)
	I1010 18:05:58.577531  149316 status.go:384] host is not running, skipping remaining checks
	I1010 18:05:58.577536  149316 status.go:176] multinode-030207-m03 status: &{Name:multinode-030207-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.19s)
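Note: the two status checks above show the exit-code contract this test relies on. `minikube status` exits 0 only when every node is up, and exits 7 once any host is stopped, while the per-node report still arrives intact on stdout. A minimal Go sketch of a caller that branches on that observed behavior; the profile name comes from this run, and the meaning attached to exit code 7 is inferred from the log above rather than from a documented API:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same command the test drives; stdout carries the
	// per-node report whether or not the exit status is zero.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-030207", "status")
	out, err := cmd.Output() // Output still returns captured stdout on failure

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes running:")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Observed above: 7 means at least one host is Stopped,
		// but the report on stdout is still complete and usable.
		fmt.Println("degraded cluster, report follows:")
	default:
		fmt.Println("status failed outright:", err)
		return
	}
	fmt.Print(string(out))
}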
TestMultiNode/serial/StartAfterStop (7.3s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-030207 node start m03 -v=5 --alsologtostderr: (6.618751799s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.30s)

TestMultiNode/serial/RestartKeepsNodes (76.79s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-030207
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-030207
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-030207: (29.462188912s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-030207 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-030207 --wait=true -v=5 --alsologtostderr: (47.230587329s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-030207
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.79s)

TestMultiNode/serial/DeleteNode (5.2s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-030207 node delete m03: (4.62237773s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.20s)

TestMultiNode/serial/StopMultiNode (28.54s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 stop
E1010 18:07:44.104389    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-030207 stop: (28.373911513s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-030207 status: exit status 7 (84.210851ms)
-- stdout --
	multinode-030207
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-030207-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-030207 status --alsologtostderr: exit status 7 (81.821284ms)
-- stdout --
	multinode-030207
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-030207-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1010 18:07:56.377563  159031 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:07:56.377836  159031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:07:56.377846  159031 out.go:374] Setting ErrFile to fd 2...
	I1010 18:07:56.377854  159031 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:07:56.378126  159031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:07:56.378309  159031 out.go:368] Setting JSON to false
	I1010 18:07:56.378342  159031 mustload.go:65] Loading cluster: multinode-030207
	I1010 18:07:56.378396  159031 notify.go:220] Checking for updates...
	I1010 18:07:56.378722  159031 config.go:182] Loaded profile config "multinode-030207": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:07:56.378739  159031 status.go:174] checking status of multinode-030207 ...
	I1010 18:07:56.379162  159031 cli_runner.go:164] Run: docker container inspect multinode-030207 --format={{.State.Status}}
	I1010 18:07:56.397797  159031 status.go:371] multinode-030207 host status = "Stopped" (err=<nil>)
	I1010 18:07:56.397815  159031 status.go:384] host is not running, skipping remaining checks
	I1010 18:07:56.397821  159031 status.go:176] multinode-030207 status: &{Name:multinode-030207 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1010 18:07:56.397840  159031 status.go:174] checking status of multinode-030207-m02 ...
	I1010 18:07:56.398129  159031 cli_runner.go:164] Run: docker container inspect multinode-030207-m02 --format={{.State.Status}}
	I1010 18:07:56.414893  159031 status.go:371] multinode-030207-m02 host status = "Stopped" (err=<nil>)
	I1010 18:07:56.414912  159031 status.go:384] host is not running, skipping remaining checks
	I1010 18:07:56.414917  159031 status.go:176] multinode-030207-m02 status: &{Name:multinode-030207-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.54s)

TestMultiNode/serial/RestartMultiNode (48.69s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-030207 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-030207 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.104870592s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-030207 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.69s)

TestMultiNode/serial/ValidateNameConflict (23.28s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-030207
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-030207-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-030207-m02 --driver=docker  --container-runtime=crio: exit status 14 (61.229909ms)
-- stdout --
	* [multinode-030207-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-030207-m02' is duplicated with machine name 'multinode-030207-m02' in profile 'multinode-030207'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-030207-m03 --driver=docker  --container-runtime=crio
E1010 18:09:03.200345    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-030207-m03 --driver=docker  --container-runtime=crio: (20.513057398s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-030207
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-030207: exit status 80 (281.093967ms)
-- stdout --
	* Adding node m03 to cluster multinode-030207 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-030207-m03 already exists in multinode-030207-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-030207-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-030207-m03: (2.381894801s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.28s)
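Note: both failure modes above come down to name collisions: a new profile may not reuse an existing machine name (exit 14, MK_USAGE), and `node add` refuses a node name already taken by another profile (exit 80, GUEST_NODE_ADD). A minimal Go sketch of the first check; the names come from this run, and the helper shape is illustrative rather than minikube's actual code:

package main

import "fmt"

// machineNames would come from enumerating existing profiles; these
// are the machines present in the run above.
var machineNames = []string{
	"multinode-030207",
	"multinode-030207-m02",
	"multinode-030207-m03",
}

// validateProfileName rejects a profile whose name collides with any
// existing machine, as the MK_USAGE failure above does.
func validateProfileName(name string) error {
	for _, m := range machineNames {
		if m == name {
			return fmt.Errorf("profile name %q is duplicated with machine name %q", name, m)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateProfileName("multinode-030207-m02")) // rejected
	fmt.Println(validateProfileName("multinode-030207-m04")) // nil: name is free
}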
TestPreload (94.14s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-242245 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-242245 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (47.89193341s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-242245 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-242245 image pull gcr.io/k8s-minikube/busybox: (2.212794555s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-242245
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-242245: (5.972510441s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-242245 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1010 18:10:26.264385    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-242245 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (35.433859069s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-242245 image list
helpers_test.go:175: Cleaning up "test-preload-242245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-242245
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-242245: (2.409867501s)
--- PASS: TestPreload (94.14s)

TestScheduledStopUnix (98.52s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-216203 --memory=3072 --driver=docker  --container-runtime=crio
E1010 18:10:47.177155    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-216203 --memory=3072 --driver=docker  --container-runtime=crio: (22.908153823s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-216203 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-216203 -n scheduled-stop-216203
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-216203 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1010 18:11:10.015146    9354 retry.go:31] will retry after 115.978µs: open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/scheduled-stop-216203/pid: no such file or directory
I1010 18:11:10.016321    9354 retry.go:31] will retry after 210.762µs: open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/scheduled-stop-216203/pid: no such file or directory
I1010 18:11:10.017462    9354 retry.go:31] will retry after 208.64µs: open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/scheduled-stop-216203/pid: no such file or directory
I1010 18:11:10.018591    9354 retry.go:31] will retry after 395.325µs: open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/scheduled-stop-216203/pid: no such file or directory
I1010 18:11:10.019724    9354 retry.go:31] will retry after 620.677µs: open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/scheduled-stop-216203/pid: no such file or directory
I1010 18:11:10.020846    9354 retry.go:31] will retry after 866.528µs: open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/scheduled-stop-216203/pid: no such file or directory
I1010 18:11:10.021966    9354 retry.go:31] will retry after 1.60916ms: open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/scheduled-stop-216203/pid: no such file or directory
I1010 18:11:10.024164    9354 retry.go:31] will retry after 1.530239ms: open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/scheduled-stop-216203/pid: no such file or directory
I1010 18:11:10.026356    9354 retry.go:31] will retry after 3.701859ms: open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/scheduled-stop-216203/pid: no such file or directory
I1010 18:11:10.030583    9354 retry.go:31] will retry after 3.579247ms: open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/scheduled-stop-216203/pid: no such file or directory
I1010 18:11:10.034807    9354 retry.go:31] will retry after 4.384244ms: open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/scheduled-stop-216203/pid: no such file or directory
I1010 18:11:10.040087    9354 retry.go:31] will retry after 11.481172ms: open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/scheduled-stop-216203/pid: no such file or directory
I1010 18:11:10.052328    9354 retry.go:31] will retry after 6.778603ms: open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/scheduled-stop-216203/pid: no such file or directory
I1010 18:11:10.059580    9354 retry.go:31] will retry after 10.92945ms: open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/scheduled-stop-216203/pid: no such file or directory
I1010 18:11:10.070879    9354 retry.go:31] will retry after 42.283407ms: open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/scheduled-stop-216203/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-216203 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-216203 -n scheduled-stop-216203
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-216203
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-216203 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-216203
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-216203: exit status 7 (64.951093ms)
-- stdout --
	scheduled-stop-216203
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-216203 -n scheduled-stop-216203
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-216203 -n scheduled-stop-216203: exit status 7 (63.260048ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-216203" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-216203
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-216203: (4.258475385s)
--- PASS: TestScheduledStopUnix (98.52s)
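Note: the burst of retry.go lines in this test is a poll loop: the test waits for the scheduled-stop pid file to appear, retrying with short, roughly doubling, jittered delays. A sketch of that pattern under the same assumptions; the file path echoes the log above, while the backoff constants and helper are illustrative, not minikube's actual retry implementation:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls until path exists, sleeping with a jittered,
// roughly doubling delay between attempts, as the retry.go log
// lines above do while waiting for the scheduled-stop pid file.
func waitForFile(path string, deadline time.Duration) error {
	base := 100 * time.Microsecond // illustrative starting delay
	start := time.Now()
	for time.Since(start) < deadline {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		sleep := base + time.Duration(rand.Int63n(int64(base))) // add jitter
		fmt.Printf("will retry after %v: %s: no such file or directory\n", sleep, path)
		time.Sleep(sleep)
		base *= 2
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	// Hypothetical path, stands in for the profile pid file above.
	_ = waitForFile("/tmp/example/pid", 50*time.Millisecond)
}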
TestInsufficientStorage (9.61s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-807620 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-807620 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.141485492s)
-- stdout --
	{"specversion":"1.0","id":"db9cda7c-21ae-417c-be80-78e8a6ec5bbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-807620] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"96774e1e-41a4-4684-85e0-b81bc0f6998d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21724"}}
	{"specversion":"1.0","id":"d830e7f8-6336-4b3a-9504-76e0dc0e8db4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d5e31abf-a8fe-4df2-98f2-1990bf93ff5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig"}}
	{"specversion":"1.0","id":"e0aa9bb4-fe24-4cfd-b477-0946867db5b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube"}}
	{"specversion":"1.0","id":"f35c894a-4c99-4c72-aff3-a56c97ce9329","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"0700c8da-c117-4121-a975-923d8e959c84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e93d5023-a993-4880-a338-2f03deb6a27c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e0a8b57d-6234-443c-92b1-3d97a60d9731","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"84b19dc9-a6a7-4b97-82e4-2b444bcb1b03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5038e378-1525-497d-a884-20bd4e040f1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4626ee0e-8e29-4b84-bff5-21f43d1f2f81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-807620\" primary control-plane node in \"insufficient-storage-807620\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f9add13-848f-4912-896c-18be60374538","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760103811-21724 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"3ae5c345-e97a-4f47-9a8d-a64f768144c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"71878177-077d-49a5-afed-3cff1a9b1b31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-807620 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-807620 --output=json --layout=cluster: exit status 7 (274.4279ms)
-- stdout --
	{"Name":"insufficient-storage-807620","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-807620","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1010 18:12:32.621150  179439 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-807620" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-807620 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-807620 --output=json --layout=cluster: exit status 7 (271.168527ms)
-- stdout --
	{"Name":"insufficient-storage-807620","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-807620","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1010 18:12:32.892918  179547 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-807620" does not appear in /home/jenkins/minikube-integration/21724-5815/kubeconfig
	E1010 18:12:32.903879  179547 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/insufficient-storage-807620/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-807620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-807620
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-807620: (1.9174108s)
--- PASS: TestInsufficientStorage (9.61s)
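Note: with `--output=json`, minikube emits one CloudEvents-style JSON object per line, as the transcript above shows. A small Go sketch that decodes such a stream and surfaces step and error events; the struct mirrors only the fields visible in this log (type, data.message, data.currentstep, and so on), not minikube's full event schema:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event covers just the fields seen in the transcript above.
type event struct {
	Type string `json:"type"`
	Data struct {
		Message     string `json:"message"`
		CurrentStep string `json:"currentstep"`
		TotalSteps  string `json:"totalsteps"`
		ExitCode    string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	// Pipe `minikube start --output=json ...` into this program.
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON noise
		}
		switch {
		case strings.HasSuffix(ev.Type, ".step"):
			fmt.Printf("step %s/%s: %s\n", ev.Data.CurrentStep, ev.Data.TotalSteps, ev.Data.Message)
		case strings.HasSuffix(ev.Type, ".error"):
			fmt.Printf("error (exit %s): %s\n", ev.Data.ExitCode, ev.Data.Message)
		}
	}
}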
TestRunningBinaryUpgrade (47.49s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1377071497 start -p running-upgrade-390393 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1377071497 start -p running-upgrade-390393 --memory=3072 --vm-driver=docker  --container-runtime=crio: (20.019284935s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-390393 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-390393 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.972819181s)
helpers_test.go:175: Cleaning up "running-upgrade-390393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-390393
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-390393: (2.409741529s)
--- PASS: TestRunningBinaryUpgrade (47.49s)

TestKubernetesUpgrade (316.37s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-274910 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-274910 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.579196794s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-274910
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-274910: (2.021228806s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-274910 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-274910 status --format={{.Host}}: exit status 7 (81.197803ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-274910 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-274910 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m30.173172005s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-274910 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-274910 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-274910 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (76.463883ms)
-- stdout --
	* [kubernetes-upgrade-274910] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-274910
	    minikube start -p kubernetes-upgrade-274910 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2749102 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-274910 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-274910 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-274910 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.759581218s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-274910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-274910
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-274910: (2.612505403s)
--- PASS: TestKubernetesUpgrade (316.37s)
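Note: the downgrade attempt above fails fast (exit 106, K8S_DOWNGRADE_UNSUPPORTED) without touching the cluster, because the requested version is compared against the running one before any work starts. A sketch of that guard using golang.org/x/mod/semver; the error text echoes the log above, while the function shape is illustrative rather than minikube's actual code:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkUpgradePath refuses a downgrade, mirroring the
// K8S_DOWNGRADE_UNSUPPORTED failure captured above.
func checkUpgradePath(current, requested string) error {
	if semver.Compare(requested, current) < 0 {
		return fmt.Errorf(
			"unable to safely downgrade existing Kubernetes %s cluster to %s",
			current, requested)
	}
	return nil
}

func main() {
	fmt.Println(checkUpgradePath("v1.34.1", "v1.28.0")) // refused
	fmt.Println(checkUpgradePath("v1.28.0", "v1.34.1")) // nil: upgrade is allowed
}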
TestMissingContainerUpgrade (98.99s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.900753913 start -p missing-upgrade-085473 --memory=3072 --driver=docker  --container-runtime=crio
E1010 18:14:03.192115    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.900753913 start -p missing-upgrade-085473 --memory=3072 --driver=docker  --container-runtime=crio: (50.052042542s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-085473
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-085473: (1.664943914s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-085473
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-085473 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-085473 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.73905033s)
helpers_test.go:175: Cleaning up "missing-upgrade-085473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-085473
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-085473: (2.381645615s)
--- PASS: TestMissingContainerUpgrade (98.99s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444917 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-444917 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (76.109208ms)
-- stdout --
	* [NoKubernetes-444917] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
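Note: the immediate exit 14 above is pure flag validation: `--no-kubernetes` and `--kubernetes-version` contradict each other, so minikube bails out before the driver is even selected. A sketch of that kind of mutual-exclusion check using Go's standard flag package; the flag names match the log, but the wiring is illustrative, not minikube's cobra-based CLI:

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	// Contradictory flags fail fast, before any cluster work,
	// matching the MK_USAGE exit seen above.
	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14)
	}
	fmt.Println("flags ok")
}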
TestNoKubernetes/serial/StartWithK8s (36.06s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444917 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444917 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.634625775s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-444917 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.06s)

TestNetworkPlugins/group/false (8.18s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-078032 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-078032 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (1.127102876s)
-- stdout --
	* [false-078032] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21724
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1010 18:12:38.605462  181833 out.go:360] Setting OutFile to fd 1 ...
	I1010 18:12:38.605670  181833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:12:38.605678  181833 out.go:374] Setting ErrFile to fd 2...
	I1010 18:12:38.605682  181833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1010 18:12:38.605935  181833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21724-5815/.minikube/bin
	I1010 18:12:38.606434  181833 out.go:368] Setting JSON to false
	I1010 18:12:38.607600  181833 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-11","uptime":3299,"bootTime":1760116660,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1010 18:12:38.607689  181833 start.go:141] virtualization: kvm guest
	I1010 18:12:38.612685  181833 out.go:179] * [false-078032] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1010 18:12:38.613939  181833 notify.go:220] Checking for updates...
	I1010 18:12:38.613972  181833 out.go:179]   - MINIKUBE_LOCATION=21724
	I1010 18:12:38.615008  181833 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1010 18:12:38.616098  181833 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21724-5815/kubeconfig
	I1010 18:12:38.618558  181833 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21724-5815/.minikube
	I1010 18:12:38.622189  181833 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1010 18:12:38.627316  181833 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1010 18:12:38.712604  181833 config.go:182] Loaded profile config "NoKubernetes-444917": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:12:38.712756  181833 config.go:182] Loaded profile config "force-systemd-env-518163": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:12:38.712985  181833 config.go:182] Loaded profile config "offline-crio-416783": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1010 18:12:38.713127  181833 driver.go:421] Setting default libvirt URI to qemu:///system
	I1010 18:12:38.735200  181833 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1010 18:12:38.735295  181833 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1010 18:12:39.030014  181833 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:63 SystemTime:2025-10-10 18:12:38.786981797 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1010 18:12:39.030217  181833 docker.go:318] overlay module found
	I1010 18:12:39.193213  181833 out.go:179] * Using the docker driver based on user configuration
	I1010 18:12:39.254682  181833 start.go:305] selected driver: docker
	I1010 18:12:39.254712  181833 start.go:925] validating driver "docker" against <nil>
	I1010 18:12:39.254749  181833 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1010 18:12:39.360313  181833 out.go:203] 
	W1010 18:12:39.443729  181833 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1010 18:12:39.551845  181833 out.go:203] 
** /stderr **
E1010 18:12:44.103997    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:88: 
----------------------- debugLogs start: false-078032 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-078032
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-078032
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-078032
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-078032
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-078032
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-078032
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-078032
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-078032
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-078032
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-078032
>>> host: /etc/nsswitch.conf:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: /etc/hosts:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: /etc/resolv.conf:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-078032
>>> host: crictl pods:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: crictl containers:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> k8s: describe netcat deployment:
error: context "false-078032" does not exist
>>> k8s: describe netcat pod(s):
error: context "false-078032" does not exist
>>> k8s: netcat logs:
error: context "false-078032" does not exist
>>> k8s: describe coredns deployment:
error: context "false-078032" does not exist
>>> k8s: describe coredns pods:
error: context "false-078032" does not exist
>>> k8s: coredns logs:
error: context "false-078032" does not exist
>>> k8s: describe api server pod(s):
error: context "false-078032" does not exist
>>> k8s: api server logs:
error: context "false-078032" does not exist
>>> host: /etc/cni:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: ip a s:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: ip r s:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: iptables-save:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: iptables table nat:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> k8s: describe kube-proxy daemon set:
error: context "false-078032" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "false-078032" does not exist
>>> k8s: kube-proxy logs:
error: context "false-078032" does not exist
>>> host: kubelet daemon status:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: kubelet daemon config:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> k8s: kubelet logs:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-078032
>>> host: docker daemon status:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: docker daemon config:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: /etc/docker/daemon.json:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: docker system info:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: cri-docker daemon status:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: cri-docker daemon config:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: cri-dockerd version:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: containerd daemon status:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: containerd daemon config:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: /etc/containerd/config.toml:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: containerd config dump:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: crio daemon status:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: crio daemon config:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: /etc/crio:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
>>> host: crio config:
* Profile "false-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-078032"
----------------------- debugLogs end: false-078032 [took: 6.913714247s] --------------------------------
helpers_test.go:175: Cleaning up "false-078032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-078032
--- PASS: TestNetworkPlugins/group/false (8.18s)
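
Note: the MK_USAGE failure above is the expected outcome this test asserts: with --container-runtime=crio, minikube rejects --cni=false because CRI-O has no built-in pod networking. A hedged sketch of a start that satisfies the same check (bridge is just one valid choice; any supported CNI value works):

    minikube start -p demo --memory=3072 --cni=bridge --driver=docker --container-runtime=crio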

TestNoKubernetes/serial/StartWithStopK8s (28.89s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444917 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444917 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.034330191s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-444917 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-444917 status -o json: exit status 2 (307.389775ms)
-- stdout --
	{"Name":"NoKubernetes-444917","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-444917
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-444917: (3.551904296s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (28.89s)
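
Note: the exit-status-2 from "status -o json" is what lets the test assert a half-up profile: the container host is Running while Kubelet and APIServer stay Stopped. A sketch of scripting the same assertion, assuming jq is available on the host:

    out/minikube-linux-amd64 -p NoKubernetes-444917 status -o json \
      | jq -r '[.Host, .Kubelet, .APIServer] | @tsv'
    # expected for this state: Running  Stopped  Stopped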

TestNoKubernetes/serial/Start (5.31s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444917 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444917 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.305704995s)
--- PASS: TestNoKubernetes/serial/Start (5.31s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-444917 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-444917 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.658875ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
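
Note: the check above leans on systemctl semantics: "systemctl is-active --quiet" exits 0 when the unit is active and non-zero otherwise (3 is the usual code for an inactive unit), and the ssh wrapper surfaces that as the exit status 1 seen here. Run by hand inside the node, the equivalent probe is:

    sudo systemctl is-active --quiet kubelet && echo "kubelet running" || echo "kubelet not running"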

TestNoKubernetes/serial/ProfileList (1.75s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.75s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-444917
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-444917: (1.271502945s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (6.83s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-444917 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-444917 --driver=docker  --container-runtime=crio: (6.830514215s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.83s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-444917 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-444917 "sudo systemctl is-active --quiet service kubelet": exit status 1 (340.708816ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

TestStoppedBinaryUpgrade/Setup (3.06s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.06s)

TestStoppedBinaryUpgrade/Upgrade (67.22s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3160024506 start -p stopped-upgrade-839433 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3160024506 start -p stopped-upgrade-839433 --memory=3072 --vm-driver=docker  --container-runtime=crio: (50.769863833s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3160024506 -p stopped-upgrade-839433 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3160024506 -p stopped-upgrade-839433 stop: (2.367771437s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-839433 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-839433 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.078948877s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (67.22s)
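
Note: the upgrade flow above is three steps against a single profile: provision with the old release binary, stop it, then restart in place with the binary under test, which must adopt the stored cluster state. Condensed from the log:

    /tmp/minikube-v1.32.0.3160024506 start -p stopped-upgrade-839433 --memory=3072 --vm-driver=docker --container-runtime=crio
    /tmp/minikube-v1.32.0.3160024506 -p stopped-upgrade-839433 stop
    out/minikube-linux-amd64 start -p stopped-upgrade-839433 --memory=3072 --driver=docker --container-runtime=crio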

TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-839433
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.93s)

TestPause/serial/Start (40.74s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-950227 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-950227 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (40.738711219s)
--- PASS: TestPause/serial/Start (40.74s)

TestNetworkPlugins/group/auto/Start (70.62s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m10.62190116s)
--- PASS: TestNetworkPlugins/group/auto/Start (70.62s)

TestPause/serial/SecondStartNoReconfiguration (6.68s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-950227 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-950227 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.671673352s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.68s)

TestNetworkPlugins/group/kindnet/Start (40.96s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (40.963389073s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (40.96s)

TestNetworkPlugins/group/calico/Start (47.49s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (47.488533123s)
--- PASS: TestNetworkPlugins/group/calico/Start (47.49s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.03s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-7bwwl" [ab83ed70-9219-4d0d-84ed-3351bfe88151] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.025972073s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.03s)
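
Note: the ControllerPod check is a label-selector poll: the harness watches kube-system for pods matching app=kindnet until they report Running. A rough hand-run equivalent using kubectl wait (context and label taken from this run):

    kubectl --context kindnet-078032 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m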

TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-078032 "pgrep -a kubelet"
I1010 18:17:09.973905    9354 config.go:182] Loaded profile config "auto-078032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

TestNetworkPlugins/group/auto/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-078032 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-s7grh" [69427552-1840-4598-842e-d9c5006b36c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-s7grh" [69427552-1840-4598-842e-d9c5006b36c4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.002955167s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.25s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-078032 "pgrep -a kubelet"
I1010 18:17:12.226697    9354 config.go:182] Loaded profile config "kindnet-078032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-078032 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kvnhp" [36c8f3a9-31a3-455e-ac9d-f1ea9895376a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kvnhp" [36c8f3a9-31a3-455e-ac9d-f1ea9895376a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004065094s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

TestNetworkPlugins/group/auto/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-078032 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

TestNetworkPlugins/group/auto/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-078032 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

TestNetworkPlugins/group/auto/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-078032 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
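
Note: the Localhost and HairPin probes are the same nc scan aimed at two targets: -z connects without sending data, -w 5 bounds the connect wait, and the hairpin variant dials the pod's own service name (netcat), so traffic must loop back through the service VIP to the pod that sent it. As executed inside the pod:

    nc -w 5 -i 5 -z localhost 8080   # localhost reachability
    nc -w 5 -i 5 -z netcat 8080      # hairpin: pod -> own service -> same pod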

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-078032 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-078032 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-078032 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-97wlt" [62c2be02-2110-4298-90da-c42c190f2c71] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-97wlt" [62c2be02-2110-4298-90da-c42c190f2c71] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004445909s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-078032 "pgrep -a kubelet"
I1010 18:17:29.089045    9354 config.go:182] Loaded profile config "calico-078032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (8.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-078032 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hp5n9" [8865517e-82fb-4037-9af9-e4d579f52aa7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hp5n9" [8865517e-82fb-4037-9af9-e4d579f52aa7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.004578015s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.26s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-078032 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.10s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-078032 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-078032 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/Start (51.10s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (51.099994454s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.10s)
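
Note: besides built-in plugin names (kindnet, calico, flannel, bridge, ...), --cni accepts a path to a CNI manifest, which is how this group installs testdata/kube-flannel.yaml instead of a packaged plugin. A sketch with an illustrative path:

    minikube start -p demo --cni=/path/to/kube-flannel.yaml --driver=docker --container-runtime=crio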

TestNetworkPlugins/group/enable-default-cni/Start (42.30s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1010 18:17:44.104807    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/addons-594989/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (42.295396828s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (42.30s)

TestNetworkPlugins/group/flannel/Start (48.51s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (48.50717269s)
--- PASS: TestNetworkPlugins/group/flannel/Start (48.51s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-078032 "pgrep -a kubelet"
I1010 18:18:24.357553    9354 config.go:182] Loaded profile config "enable-default-cni-078032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-078032 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lzcc7" [9d20de39-550b-4936-9ce7-2afb6d7185b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lzcc7" [9d20de39-550b-4936-9ce7-2afb6d7185b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003058111s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-078032 "pgrep -a kubelet"
I1010 18:18:31.636425    9354 config.go:182] Loaded profile config "custom-flannel-078032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-078032 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f72cl" [edb9f43d-06b1-45d1-8570-dd2c4cd9818f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f72cl" [edb9f43d-06b1-45d1-8570-dd2c4cd9818f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.00394627s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-078032 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-078032 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-078032 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-078032 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-078032 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-078032 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-dnfjt" [3e1de68a-3249-4cf9-806e-874bbb916ccd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003965144s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
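
ControllerPod only runs for CNIs that ship their own controller; for flannel it waits until a pod labeled app=flannel in the kube-flannel namespace is Running before the network is exercised. An equivalent one-liner (kubectl wait substituted for the test's poll loop):

    # Wait for the flannel DaemonSet pod to come up before testing connectivity
    kubectl --context flannel-078032 -n kube-flannel wait pod -l app=flannel --for=condition=Ready --timeout=10m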

TestNetworkPlugins/group/bridge/Start (63.93s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-078032 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m3.933285089s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.93s)
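
Each network-plugin group begins with a plain minikube start that selects the CNI under test; every later subtest reuses that profile. The invocation shape, lifted from the log minus the harness prefix:

    out/minikube-linux-amd64 start -p bridge-078032 --memory=3072 \
      --wait=true --wait-timeout=15m --cni=bridge \
      --driver=docker --container-runtime=crio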

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-078032 "pgrep -a kubelet"
I1010 18:18:53.912166    9354 config.go:182] Loaded profile config "flannel-078032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (12.21s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-078032 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8f7x5" [92bd6fa9-8665-4abf-a1b9-65514a3ffac3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8f7x5" [92bd6fa9-8665-4abf-a1b9-65514a3ffac3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003773542s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.21s)

TestStartStop/group/old-k8s-version/serial/FirstStart (53.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-141193 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1010 18:19:03.192103    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/functional-728643/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-141193 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.812643906s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (53.81s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-078032 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-078032 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

TestNetworkPlugins/group/flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-078032 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

TestStartStop/group/no-preload/serial/FirstStart (57.42s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-556024 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-556024 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.423096491s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.42s)
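
no-preload runs the same start flow with --preload=false, so minikube pulls each Kubernetes image individually instead of unpacking the preloaded tarball; that is why this FirstStart (57.42s) is the slowest of the StartStop group in this run. The flags, from the log:

    out/minikube-linux-amd64 start -p no-preload-556024 --memory=3072 \
      --preload=false --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.34.1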

TestStartStop/group/embed-certs/serial/FirstStart (43.08s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-472518 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-472518 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.076397822s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.08s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-141193 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a052a617-10eb-4b35-8da3-41ed530a6878] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a052a617-10eb-4b35-8da3-41ed530a6878] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003375818s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-141193 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.26s)
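
DeployApp is the same three-step probe in every StartStop group: create the busybox pod from testdata, wait for it to run, then exec a trivial command to prove the whole apiserver-to-runtime exec path works. By hand (the wait flags are an assumption; the test uses its own poll helper):

    kubectl --context old-k8s-version-141193 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-141193 wait pod/busybox --for=condition=Ready --timeout=8m
    kubectl --context old-k8s-version-141193 exec busybox -- /bin/sh -c "ulimit -n"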

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-078032 "pgrep -a kubelet"
I1010 18:19:56.856590    9354 config.go:182] Loaded profile config "bridge-078032": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-078032 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g6z8k" [29875e09-981c-4f79-9ff1-4e6b41848fd2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g6z8k" [29875e09-981c-4f79-9ff1-4e6b41848fd2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003672008s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.19s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-078032 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-078032 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-078032 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

TestStartStop/group/old-k8s-version/serial/Stop (16.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-141193 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-141193 --alsologtostderr -v=3: (16.087980299s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.09s)

TestStartStop/group/embed-certs/serial/DeployApp (9.22s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-472518 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f2253e59-f3f8-418a-a22e-e99da86065fd] Pending
helpers_test.go:352: "busybox" [f2253e59-f3f8-418a-a22e-e99da86065fd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f2253e59-f3f8-418a-a22e-e99da86065fd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004042019s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-472518 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.22s)

TestStartStop/group/no-preload/serial/DeployApp (11.29s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-556024 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b9a243ca-7dc2-4e63-b0f7-7824f64e43f0] Pending
helpers_test.go:352: "busybox" [b9a243ca-7dc2-4e63-b0f7-7824f64e43f0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b9a243ca-7dc2-4e63-b0f7-7824f64e43f0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.003442407s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-556024 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.29s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-141193 -n old-k8s-version-141193
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-141193 -n old-k8s-version-141193: exit status 7 (74.383581ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-141193 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
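
The "exit status 7 (may be ok)" lines in the EnableAddonAfterStop steps are expected: minikube status exits non-zero while the host is not Running, and the stdout above confirms the profile is Stopped, which is exactly the state wanted after Stop. Scripted, the pattern looks like:

    # status exits non-zero for a stopped profile; treat that as the expected state here
    if ! out/minikube-linux-amd64 status --format='{{.Host}}' -p old-k8s-version-141193; then
      echo "profile is not running, as expected after stop"
    fi
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-141193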

TestStartStop/group/old-k8s-version/serial/SecondStart (47.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-141193 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-141193 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.14570615s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-141193 -n old-k8s-version-141193
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.47s)

TestStartStop/group/embed-certs/serial/Stop (17.72s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-472518 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-472518 --alsologtostderr -v=3: (17.723699388s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (17.72s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (41.030033173s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.03s)

TestStartStop/group/no-preload/serial/Stop (16.26s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-556024 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-556024 --alsologtostderr -v=3: (16.256997893s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.26s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-472518 -n embed-certs-472518
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-472518 -n embed-certs-472518: exit status 7 (73.597019ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-472518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (53.32s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-472518 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-472518 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.885307913s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-472518 -n embed-certs-472518
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.32s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-556024 -n no-preload-556024
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-556024 -n no-preload-556024: exit status 7 (86.411503ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-556024 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (49.14s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-556024 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-556024 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.787443353s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-556024 -n no-preload-556024
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.14s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-821769 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2756f9b0-fbc0-4e80-9636-d7ae1972908b] Pending
helpers_test.go:352: "busybox" [2756f9b0-fbc0-4e80-9636-d7ae1972908b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2756f9b0-fbc0-4e80-9636-d7ae1972908b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.0038406s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-821769 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.23s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-g8lm9" [b471ecc7-c8aa-40fd-bbe2-b16f4f36530f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003966206s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-g8lm9" [b471ecc7-c8aa-40fd-bbe2-b16f4f36530f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003372395s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-141193 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-821769 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-821769 --alsologtostderr -v=3: (18.585604019s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.59s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-141193 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
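
VerifyKubernetesImages lists the node's images as JSON and reports anything outside minikube's own image set; the kindnetd and busybox entries above are expected test leftovers, not failures. To eyeball the same list (the jq filter is illustrative, not what the test runs):

    out/minikube-linux-amd64 -p old-k8s-version-141193 image list --format=json \
      | jq -r '.[].repoTags[]'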

TestStartStop/group/newest-cni/serial/FirstStart (29.03s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (29.026581676s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.03s)
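
newest-cni exercises a CNI-only bring-up: --network-plugin=cni with a kubeadm pod-network CIDR override, and --wait narrowed to the components that can become healthy before any pod network exists (hence the "cni mode requires additional setup" skips further down). Flags verbatim from the log:

    out/minikube-linux-amd64 start -p newest-cni-121129 --memory=3072 \
      --wait=apiserver,system_pods,default_sa --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1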

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-75n29" [2209ed8b-b88a-45f4-a57a-36decaa54d79] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002983547s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f6cpg" [7fc23f43-736d-4b79-8552-95e649ee5d9f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003524443s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-821769 -n default-k8s-diff-port-821769
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-821769 -n default-k8s-diff-port-821769: exit status 7 (75.123842ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-821769 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-821769 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.222742815s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-821769 -n default-k8s-diff-port-821769
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.53s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-75n29" [2209ed8b-b88a-45f4-a57a-36decaa54d79] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004320811s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-556024 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-f6cpg" [7fc23f43-736d-4b79-8552-95e649ee5d9f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00281682s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-472518 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-556024 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-472518 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (2.49s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-121129 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-121129 --alsologtostderr -v=3: (2.486729655s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.49s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-121129 -n newest-cni-121129
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-121129 -n newest-cni-121129: exit status 7 (66.291695ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-121129 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (10.36s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1010 18:22:05.815289    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:05.821676    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:05.833026    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:05.854408    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:05.895806    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:05.977275    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:06.138795    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:06.460468    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:07.102286    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:08.384201    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:10.208317    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/auto-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:10.214998    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/auto-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:10.226817    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/auto-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:10.248205    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/auto-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:10.289619    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/auto-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:10.371075    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/auto-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:10.532348    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/auto-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:10.853627    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/auto-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:10.946131    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:11.495421    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/auto-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:12.777375    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/auto-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-121129 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.038529235s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-121129 -n newest-cni-121129
E1010 18:22:15.338731    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/auto-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.36s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-121129 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mb49v" [25ca2305-7568-48a1-bd71-8dbb16bb832b] Running
E1010 18:22:25.336764    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/calico-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:26.309206    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/kindnet-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:27.898847    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/calico-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1010 18:22:30.701888    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/auto-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002966942s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-mb49v" [25ca2305-7568-48a1-bd71-8dbb16bb832b] Running
E1010 18:22:33.021087    9354 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21724-5815/.minikube/profiles/calico-078032/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00290892s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-821769 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-821769 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

Test skip (26/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
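
The two Docker-specific skips above (and the DockerEnv, PodmanEnv, and Skaffold skips further down) all gate on the container runtime under test. A hedged sketch of that check, assuming a hypothetical containerRuntime() helper in place of however the suite actually reads its configuration:

package docker

import "testing"

// containerRuntime is a hypothetical stand-in for the suite's runtime
// lookup; this report was produced with crio.
func containerRuntime() string { return "crio" }

func TestDockerFlags(t *testing.T) {
	if rt := containerRuntime(); rt != "docker" {
		t.Skipf("skipping: only runs with docker container runtime, currently testing %s", rt)
	}
}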

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-078032 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-078032

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-078032

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-078032

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-078032

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-078032

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-078032

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-078032

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-078032

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-078032

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-078032

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-078032

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-078032" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-078032" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-078032

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-078032"

                                                
                                                
----------------------- debugLogs end: kubenet-078032 [took: 3.549294568s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-078032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-078032
--- SKIP: TestNetworkPlugins/group/kubenet (3.73s)
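
All of the probes above failed identically because the kubenet-078032 profile was never started: no kubectl context exists, and the kubectl config dump (clusters: null, current-context: "") shows an empty kubeconfig. A hedged sketch of a pre-flight check that would let a harness replace the whole dump with a single line, assuming only that kubectl is on PATH:

package debuglogs

import (
	"os/exec"
	"strings"
)

// contextExists reports whether a kubectl context with the given name is
// configured, via "kubectl config get-contexts -o name".
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

The same applies to the cilium-078032 dump below, which fails probe-for-probe in the same way.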

                                                
                                    
TestNetworkPlugins/group/cilium (3.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-078032 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-078032" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-078032" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-078032" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-078032

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-078032" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-078032"

                                                
                                                
----------------------- debugLogs end: cilium-078032 [took: 3.082941357s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-078032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-078032
--- SKIP: TestNetworkPlugins/group/cilium (3.24s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-523797" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-523797
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)
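
The "Cleaning up ... profile" lines that close each skipped group come from a shared helper that deletes the profile whether or not the test ever started it. A hedged sketch of that pattern, using the binary path the logs show; the function name is illustrative, not the suite's:

package helpers

import (
	"os/exec"
	"testing"
)

// cleanupProfile always attempts "out/minikube-linux-amd64 delete -p <profile>",
// logging rather than failing on error, since a skipped test may never have
// created the profile in the first place.
func cleanupProfile(t *testing.T, profile string) {
	t.Helper()
	t.Logf("Cleaning up %q profile ...", profile)
	out, err := exec.Command("out/minikube-linux-amd64", "delete", "-p", profile).CombinedOutput()
	if err != nil {
		t.Logf("delete -p %s failed: %v\n%s", profile, err, out)
	}
}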

                                                
                                    